SAP HANA Workload On Azure
Overview
Get started
Certifications
SAP HANA on Azure (Large Instances)
Overview
What is SAP HANA on Azure (Large Instances)?
Know the terms
Certification
Available SKUs for HLI
Sizing
Onboarding requirements
SAP HANA data tiering and extension nodes
Operations model and responsibilities
Compatible Operating Systems
Architecture
General architecture
Network architecture
Storage architecture
HLI supported scenarios
Infrastructure and connectivity
HLI deployment
Connecting Azure VMs to HANA Large Instances
Connecting a VNet to HANA Large Instance ExpressRoute
Additional network requirements
Install SAP HANA
Validate the configuration
Sample HANA Installation
High availability and disaster recovery
Options and considerations
Backup and restore
Principles and preparation
Disaster recovery failover procedure
Troubleshoot and monitor
Monitoring HLI
Monitoring and troubleshooting from HANA side
How to
Azure HANA Large Instances control through Azure portal
Manage BareMetal Instances through the Azure portal
HA Setup with STONITH
OS Backup for Type II SKUs
Enable Kdump for HANA Large Instances
OS Upgrade for HANA Large Instances
Setting up SMT server for SUSE Linux
HLI to Azure VM migration
Buy an SAP HANA Large Instances reservation
SAP HANA on Azure Virtual Machines
Installation of SAP HANA on Azure VMs
S/4 HANA or BW/4 HANA SAP CAL deployment guide
SAP HANA infrastructure configurations and operations on Azure
SAP HANA Azure virtual machine storage configurations
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
SAP HANA Availability in Azure Virtual Machines
SAP HANA on Azure Availability overview
SAP HANA on Azure Availability within one Azure region
SAP HANA on Azure Availability across Azure regions
Set up SAP HANA System Replication on SLES
Set up SAP HANA System Replication on RHEL
Set up SAP HANA System Replication with ANF on RHEL
Troubleshoot SAP HANA scale-out and Pacemaker on SLES
SAP HANA scale-out HSR with Pacemaker on RHEL
SAP HANA scale-out with standby node with Azure NetApp Files on SLES
SAP HANA scale-out with standby node with Azure NetApp Files on RHEL
SAP HANA backup overview
SAP HANA file level backup
SAP NetWeaver and Business One on Azure Virtual Machines
SAP workload planning and deployment checklist
Plan and implement SAP NetWeaver on Azure
Azure Storage types for SAP workload
SAP workload on Azure virtual machine supported scenarios
What SAP software is supported for Azure deployments
SAP NetWeaver Deployment guide
DBMS deployment guides for SAP workload
General Azure Virtual Machines DBMS deployment for SAP workload
SQL Server Azure Virtual Machines DBMS deployment for SAP workload
Oracle Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache and Content Server deployment on Azure
SAP HANA Availability in Azure Virtual Machines
SAP HANA on Azure Availability overview
SAP HANA on Azure Availability within one Azure region
SAP HANA on Azure Availability across Azure regions
SAP Business One on Azure Virtual Machines
SAP IDES on Windows/SQL Server SAP CAL deployment guide
SAP LaMa connector for Azure
High Availability (HA) on Windows and Linux
Overview
High Availability Architecture
HA Architecture and Scenarios
Higher Availability Architecture and Scenarios
SAP workload configurations with Azure Availability Zones
HA on Windows with Shared Disk for (A)SCS Instance
HA on Windows with SOFS File Share for (A)SCS Instance
HA for SAP NetWeaver on Windows with Azure NetApp Files (SMB)
HA on SUSE Linux for (A)SCS Instance
HA on SUSE Linux for (A)SCS Instance with Azure NetApp Files
HA on Red Hat Enterprise Linux for (A)SCS Instance
HA on Red Hat Enterprise Linux for (A)SCS Instance with Azure NetApp Files
Azure Infrastructure Preparation
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
High availability for NFS on Azure VMs on SLES
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
Pacemaker on SLES
Pacemaker on RHEL
Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-availability scenarios
SAP Installation
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
HA for SAP NetWeaver on Windows with Azure NetApp Files (SMB)
SUSE Linux with NFS for (A)SCS Instance
SUSE Linux with NFS for (A)SCS Instance with Azure NetApp Files
High availability for SAP NetWeaver on Red Hat Enterprise Linux
Red Hat Enterprise Linux with NFS for (A)SCS Instance with Azure NetApp Files
SAP Multi-SID
Windows with Azure Shared Disk for (A)SCS Instance
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
SLES with Pacemaker multi-SID for (A)SCS Instance
RHEL with Pacemaker multi-SID for (A)SCS Instance
Azure Site Recovery for SAP Disaster Recovery
Azure Proximity Placement Groups for optimal network latency with SAP applications
SAP BusinessObjects Business Intelligence platform on Azure
SAP BusinessObjects BI platform planning and implementation guide on Azure
SAP BusinessObjects BI platform deployment guide for Linux on Azure
Integrate Azure AD with SAP applications
Provision users from SAP SuccessFactors to Active Directory
Provision users from SAP SuccessFactors to Azure AD
Write-back users from Azure AD to SAP SuccessFactors
Provision users to SAP Cloud Platform Identity Authentication Service
Configure SSO with SAP Cloud Platform Identity Authentication Service
Configure SSO with SAP SuccessFactors
Configure SSO with SAP Analytics Cloud
Configure SSO with SAP Fiori
Configure SSO with SAP Qualtrics
Configure SSO with SAP Ariba
Configure SSO with SAP Concur Travel and Expense
Configure SSO with SAP Cloud Platform
Configure SSO with SAP NetWeaver
Configure SSO with SAP Business ByDesign
Configure SSO with SAP HANA
Configure SSO with SAP Cloud for Customer
Configure SSO with SAP Fiori Launchpad
Azure Services Integration into SAP
Use SAP HANA in Power BI Desktop
DirectQuery and SAP HANA
Use the SAP BW Connector in Power BI Desktop
Azure Data Factory offers SAP HANA and Business Warehouse data integration
Azure Monitor for SAP Solutions
Azure Monitor for SAP Solutions Overview
Azure Monitor for SAP Solutions Providers
Configure Azure Monitor for SAP Solutions - Portal
Configure Azure Monitor for SAP Solutions - Azure PowerShell
Azure Monitor for SAP Solutions FAQ
Reference
Azure CLI
Azure PowerShell
Resources
Azure Roadmap
Use Azure to host and run SAP workload scenarios
When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a
scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure.
With the expanded partnership between Microsoft and SAP, you can run SAP applications across development
and test and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP
BI on Linux to Windows, and SAP HANA to SQL, we've got you covered.
Besides hosting SAP NetWeaver scenarios with the different DBMS on Azure, you can host other SAP workload
scenarios, like SAP BI on Azure.
Azure offers a capability for SAP HANA that sets it apart. To enable hosting SAP scenarios that demand large amounts of memory and CPU resources for SAP HANA, Azure offers the use of customer-dedicated bare-metal hardware. Use this solution to run SAP HANA deployments that require up to 24 TB (120 TB scale-out) of memory for S/4HANA or other SAP HANA workloads.
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and single sign-on. This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components and SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offers. Such integration and single sign-on scenarios with Azure AD and SAP entities are described and documented in the section "Azure AD SAP identity integration and single sign-on."
Change Log
12/21/2020: Add new certifications to SKUs of HANA Large Instances in Available SKUs for HLI
12/12/2020: Added pointer to SAP note clarifying details on Oracle Enterprise Linux support by SAP to What
SAP software is supported for Azure deployments
11/26/2020: Adapt SAP HANA Azure virtual machine storage configurations and Azure Storage types for SAP
workload to changed single VM SLAs
11/05/2020: Changing link to new SAP note about HANA supported file system types in SAP HANA Azure
virtual machine storage configurations
10/26/2020: Changing some tables for Azure premium storage configuration to clarify provisioned versus
burst throughput in SAP HANA Azure virtual machine storage configurations
10/22/2020: Change in HA for SAP NW on Azure VMs on SLES for SAP applications, HA for SAP NW on Azure
VMs on SLES with ANF, HA for SAP NW on Azure VMs on RHEL for SAP applications and HA for SAP NW on
Azure VMs on RHEL with ANF to adjust the recommendation for net.ipv4.tcp_keepalive_time
10/16/2020: Change in HA of IBM Db2 LUW on Azure VMs on SLES with Pacemaker, HA for SAP NW on Azure
VMs on RHEL for SAP applications, HA of IBM Db2 LUW on Azure VMs on RHEL, HA for SAP NW on Azure
VMs on RHEL multi-SID guide, HA for SAP NW on Azure VMs on RHEL with ANF, HA for SAP NW on Azure
VMs on SLES for SAP applications, HA for SAP NW on Azure VMs on SLES multi-SID guide, HA for SAP NW
on Azure VMs on SLES with ANF for SAP applications, HA for NFS on Azure VMs on SLES, HA of SAP HANA on
Azure VMs on SLES, HA for SAP HANA scale-up with ANF on RHEL, HA of SAP HANA on Azure VMs on RHEL,
SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL, Prepare Azure infrastructure for SAP
ASCS/SCS with WSFC and shared disk, multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared
disk and multi-SID HA guide for SAP ASCS/SCS with WSFC and shared disk to add a statement that floating IP
is not supported in load-balancing scenarios on secondary IPs
10/16/2020: Adding documentation to control storage snapshots of HANA Large Instances in Backup and
restore of SAP HANA on HANA Large Instances
10/15/2020: Release of SAP BusinessObjects BI Platform on Azure documentation, SAP BusinessObjects BI platform planning and implementation guide on Azure and SAP BusinessObjects BI platform deployment guide for Linux on Azure
10/05/2020: Release of SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL configuration guide
09/30/2020: Change in High availability of SAP HANA on Azure VMs on RHEL, HA for SAP HANA scale-up
with ANF on RHEL and Setting up Pacemaker on RHEL in Azure to adapt the instructions for RHEL 8.1
09/29/2020: Making restrictions and recommendations around usage of PPG more obvious in the article
Azure proximity placement groups for optimal network latency with SAP applications
09/28/2020: Adding a new storage operation guide for SAP HANA using Azure NetApp Files with the
document NFS v4.1 volumes on Azure NetApp Files for SAP HANA
09/23/2020: Add new certified SKUs for HLI in Available SKUs for HLI
09/20/2020: Changes in documents Considerations for Azure Virtual Machines DBMS deployment for SAP
workload, SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver, Azure Virtual Machines
Oracle DBMS deployment for SAP workload, IBM Db2 Azure Virtual Machines DBMS deployment for SAP
workload to adapt to new configuration suggestion that recommend separation of DBMS binaries and SAP
binaries into different Azure disks. Also adding Ultra disk recommendations to the different guides.
09/08/2020: Change in High availability of SAP HANA on Azure VMs on SLES to clarify stonith definitions
09/03/2020: Change in SAP HANA Azure virtual machine storage configurations to adapt to minimal 2 IOPS
per 1 GB capacity with Ultra disk
09/02/2020: Change in Available SKUs for HLI to get more transparent in what SKUs are HANA certified
August 25, 2020: Change in HA for SAP NW on Azure VMs on SLES with ANF to fix typo
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and shared disk, Prepare Azure
infrastructure for SAP ASCS/SCS with WSFC and shared disk and Install SAP NW HA with WSFC and shared
disk to introduce the option of using Azure shared disk and document SAP ERS2 architecture
August 25, 2020: Release of multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared disk
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB), Prepare
Azure infrastructure for SAP ASCS/SCS with WSFC and file share, multi-SID HA guide for SAP ASCS/SCS with
WSFC and shared disk and multi-SID HA guide for SAP ASCS/SCS with WSFC and SOFS file share as a result
of the content updates and restructuring in the HA guides for SAP ASCS/SCS with WFC and shared disk
August 21, 2020: Adding new OS release into Compatible Operating Systems for HANA Large Instances as
available operating system for HLI units of type I and II
August 18, 2020: Release of HA for SAP HANA scale-up with ANF on RHEL
August 17, 2020: Add information about using Azure Site Recovery for moving SAP NetWeaver systems from
on-premises to Azure in article Azure Virtual Machines planning and implementation for SAP NetWeaver
08/14/2020: Adding disk configuration advice for Db2 in article IBM Db2 Azure Virtual Machines DBMS
deployment for SAP workload
August 11, 2020: Adding RHEL 7.6 into Compatible Operating Systems for HANA Large Instances as available
operating system for HLI units of type I
August 10, 2020: Introducing cost conscious SAP HANA storage configuration in SAP HANA Azure virtual
machine storage configurations and making some updates to SAP workloads on Azure: planning and
deployment checklist
August 04, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in
Azure to emphasize the importance of reliable name resolution for Pacemaker clusters
August 04, 2020: Change in SAP NW HA on WFCS with file share, SAP NW HA on WFCS with shared disk, HA
for SAP NW on Azure VMs, HA for SAP NW on Azure VMs on SLES, HA for SAP NW on Azure VMs on SLES
with ANF, HA for SAP NW on Azure VMs on SLES multi-SID guide, High availability for SAP NetWeaver on
Azure VMs on RHEL, HA for SAP NW on Azure VMs on RHEL with ANF and HA for SAP NW on Azure VMs on
RHEL multi-SID guide to clarify the use of parameter enque/encni/set_so_keepalive
July 23, 2020: Added the Save on SAP HANA Large Instances with an Azure reservation article explaining what
you need to know before you buy an SAP HANA Large Instances reservation and how to make the purchase
July 16, 2020: Describe how to use Azure PowerShell to install new VM Extension for SAP in the Deployment
Guide
July 04, 2020: Release of Azure Monitor for SAP Solutions (preview)
July 01, 2020: Suggesting less expensive storage configuration based on Azure premium storage burst
functionality in document SAP HANA Azure virtual machine storage configurations
June 24, 2020: Change in Setting up Pacemaker on SLES in Azure to release new improved Azure Fence Agent
and more resilient STONITH configuration for devices, based on Azure Fence Agent
June 24, 2020: Change in Setting up Pacemaker on RHEL in Azure to release more resilient STONITH
configuration
June 23, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver guide
and introduction of Azure Storage types for SAP workload guide
06/22/2020: Add installation steps for new VM Extension for SAP to the Deployment Guide
June 16, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
to add a link to SUSE Public Cloud Infrastructure 101 documentation
June 10, 2020: Adding new HLI SKUs into Available SKUs for HLI and SAP HANA (Large Instances) storage
architecture
May 21, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in Azure
to add a link to Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
May 19, 2020: Add important message not to use root volume group when using LVM for HANA related
volumes in SAP HANA Azure virtual machine storage configurations
May 19, 2020: Add new supported OS for HANA Large Instance Type II in Compatible Operating Systems for HANA Large Instances
May 12, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
to update links and add information for 3rd party firewall configuration
May 11, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to set resource stickiness to 0
for the netcat resource, as that leads to more streamlined failover
May 05, 2020: Changes in Azure Virtual Machines planning and implementation for SAP NetWeaver to
express that Gen2 deployments are available for Mv1 VM family
April 24, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with ANF on SLES, in SAP
HANA scale-out with standby node on Azure VMs with ANF on RHEL, High availability for SAP NetWeaver on
Azure VMs on SLES with ANF and High availability for SAP NetWeaver on Azure VMs on RHEL with ANF to
add clarification that the IP addresses for ANF volumes are automatically assigned
April 22, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to remove meta attribute
is-managed from the instructions, as it conflicts with placing the cluster in or out of maintenance mode
April 21, 2020: Added SQL Azure DB as supported DBMS for SAP (Hybris) Commerce Platform 1811 and later
in articles What SAP software is supported for Azure deployments and SAP certifications and configurations
running on Microsoft Azure
April 16, 2020: Added SAP HANA as supported DBMS for SAP (Hybris) Commerce Platform in articles What
SAP software is supported for Azure deployments and SAP certifications and configurations running on
Microsoft Azure
April 13, 2020: Correct to exact SAP ASE release numbers in SAP ASE Azure Virtual Machines DBMS
deployment for SAP workload
April 07, 2020: Change in Setting up Pacemaker on SLES in Azure to clarify cloud-netconfig-azure instructions
April 06, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on
SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to
remove references to NetApp TR-4435 (replaced by TR-4746)
March 31, 2020: Change in High availability of SAP HANA on Azure VMs on SLES and High availability of SAP
HANA on Azure VMs on RHEL to add instructions how to specify stripe size when creating striped volumes
March 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications
to align the file system mount options to NetApp TR-4746 (remove the sync mount option)
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to add
reference to NetApp TR-4746
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES for SAP applications,
High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files for SAP applications, High
availability for NFS on Azure VMs on SLES, High availability for SAP NetWeaver on Azure VMs on RHEL multi-
SID guide, High availability for SAP NetWeaver on Azure VMs on RHEL for SAP applications and High
availability for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files for SAP applications to update
diagrams and clarify instructions for Azure Load Balancer backend pool creation
March 19, 2020: Major revision of document Quickstart: Manual installation of single-instance SAP HANA on
Azure Virtual Machines to Installation of SAP HANA on Azure Virtual Machines
March 17, 2020: Change in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to remove SBD
configuration setting that is no longer necessary
March 16, 2020: Clarification of column certification scenario in SAP HANA IaaS certified platform in What SAP
software is supported for Azure deployments
03/11/2020: Change in SAP workload on Azure virtual machine supported scenarios to clarify multiple
databases per DBMS instance support
March 11, 2020: Change in Azure Virtual Machines planning and implementation for SAP NetWeaver
explaining Generation 1 and Generation 2 VMs
March 10, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify real existing
throughput limits of ANF
March 09, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server for SAP applications, High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP applications, High availability for NFS on Azure VMs on SUSE Linux
Enterprise Server, Setting up Pacemaker on SUSE Linux Enterprise Server in Azure, High availability of IBM
Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker, High availability of SAP HANA on
Azure VMs on SUSE Linux Enterprise Server and High availability for SAP NetWeaver on Azure VMs on SLES
multi-SID guide to update cluster resources with resource agent azure-lb
March 05, 2020: Structure changes and content changes for Azure Regions and Azure Virtual machines in
Azure Virtual Machines planning and implementation for SAP NetWeaver
03/03/2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to
change to more efficient ANF volume layout
March 01, 2020: Reworked Backup guide for SAP HANA on Azure Virtual Machines to include Azure Backup
service. Reduced and condensed content in SAP HANA Azure Backup on file level and deleted a third
document dealing with backup through disk snapshot. Content gets handled in Backup guide for SAP HANA
on Azure Virtual Machines
February 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications and High availability for SAP
NetWeaver on Azure VMs on SLES multi-SID guide to adjust "on fail" cluster parameter
February 26, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify file system
choice for HANA on Azure
February 26, 2020: Change in High availability architecture and scenarios for SAP to include the link to the HA
for SAP NetWeaver on Azure VMs on RHEL multi-SID guide
February 26, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications, Azure VMs high availability for
SAP NetWeaver on RHEL and Azure VMs high availability for SAP NetWeaver on RHEL with Azure NetApp
Files to remove the statement that multi-SID ASCS/ERS cluster is not supported
February 26, 2020: Release of High availability for SAP NetWeaver on Azure VMs on RHEL multi-SID guide to
add a link to the SUSE multi-SID cluster guide
02/25/2020: Change in High availability architecture and scenarios for SAP to add links to newer HA articles
February 25, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise
Server with Pacemaker to point to document that describes access to public endpoint with Standard Azure
Load balancer
February 21, 2020: Complete revision of the article SAP ASE Azure Virtual Machines DBMS deployment for
SAP workload
February 21, 2020: Change in SAP HANA Azure virtual machine storage configuration to represent new
recommendation in stripe size for /hana/data and adding setting of I/O scheduler
February 21, 2020: Changes in HANA Large Instance documents to represent newly certified SKUs of S224
and S224m
February 21, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to adjust the cluster constraints for enqueue
server replication 2 architecture (ENSA2)
February 20, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add a link to the SUSE multi-SID cluster guide
February 13, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver to
implement links to new documents
February 13, 2020: Added new document SAP workload on Azure virtual machine supported scenario
February 13, 2020: Added new document What SAP software is supported for Azure deployment
February 13, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux
Server to point to document that describes access to public endpoint with Standard Azure Load balancer
February 13, 2020: Add the new VM types to SAP certifications and configurations running on Microsoft Azure
February 13, 2020: Add new SAP support notes SAP workloads on Azure: planning and deployment checklist
February 13, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to align the cluster resources timeouts to the
Red Hat timeout recommendations
February 11, 2020: Release of SAP HANA on Azure Large Instance migration to Azure Virtual Machines
February 07, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA
scenarios to update sample NSG screenshot
February 03, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications and
High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to remove the warning
about using dash in the host names of cluster nodes on SLES
January 28, 2020: Change in High availability of SAP HANA on Azure VMs on RHEL to align the SAP HANA
cluster resources timeouts to the Red Hat timeout recommendations
January 17, 2020: Change in Azure proximity placement groups for optimal network latency with SAP
applications to change the section of moving existing VMs into a proximity placement group
January 17, 2020: Change in SAP workload configurations with Azure Availability Zones to point to procedure
that automates measurements of latency between Availability Zones
January 16, 2020: Change in How to install and configure SAP HANA (Large Instances) on Azure to adapt OS
releases to HANA IaaS hardware directory
January 16, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add instructions for SAP systems, using enqueue server 2 architecture (ENSA2)
January 10, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to add
instructions on how to make nfs4_disable_idmapping changes permanent.
January 10, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp
Files for SAP applications and in Azure Virtual Machines high availability for SAP NetWeaver on RHEL with
Azure NetApp Files for SAP applications to add instructions how to mount Azure NetApp Files NFSv4 volumes.
December 23, 2019: Release of High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide
December 18, 2019: Release of SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on RHEL
SAP certifications and configurations running on
Microsoft Azure
SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP in
order to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline Azure-supported configurations and the growing list of SAP certifications. This overview might deviate here and there from the official SAP lists. How to get to the detailed data is documented in the article What SAP software is supported for Azure deployments.
| SAP product | Supported OS | Azure offerings |
| --- | --- | --- |
| SAP HANA Developer Edition (including the HANA client software comprised of SQLODBC, ODBO-Windows only, ODBC, JDBC drivers, HANA studio, and HANA database) | Red Hat Enterprise Linux, SUSE Linux Enterprise | D-Series VM family |
| Business One on HANA | SUSE Linux Enterprise | DS14_v2, M32ts, M32ls, M64ls, M64s; SAP HANA Certified IaaS Platforms |
| SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | Controlled Availability for GS5. Full support for M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms |
| Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms |
| HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms |
| SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms |
Be aware that SAP uses the term 'clustering' in SAP HANA Certified IaaS Platforms as a synonym for 'scale-out' and NOT for high-availability 'clustering'.
| SAP product | OS | DBMS | Azure VM types |
| --- | --- | --- | --- |
| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
| SAP Business One on SQL Server (SAP Note #928839) | Windows | SQL Server | All NetWeaver certified VM types |
| SAP BPC 10.01 MS SP08 (SAP Note #2451795) | Windows and Linux | | All NetWeaver certified VM types |
| SAP Hybris Commerce Platform (Hybris Documentation) | Windows | SQL Server, Oracle | All NetWeaver certified VM types |
| SAP Hybris Commerce Platform (Hybris Documentation) | SLES 12 or more recent | SAP HANA | All NetWeaver certified VM types |
| SAP Hybris Commerce Platform (Hybris Documentation: https://help.sap.com/viewer/a74589c3a81a4a95bf51d87258c0ab15/6.7.0.0/en-US/8c71300f866910149b40c88dfc0de431.html) | RHEL 7 or more recent | SAP HANA | All NetWeaver certified VM types |
| SAP (Hybris) Commerce Platform 1811 and later (Hybris Documentation) | Windows, SLES, or RHEL | SQL Azure DB | All NetWeaver certified VM types |
What is SAP HANA on Azure (Large Instances)?
SAP HANA on Azure (Large Instances) is a solution unique to Azure. In addition to providing virtual machines
for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA on bare-
metal servers that are dedicated to you. The SAP HANA on Azure (Large Instances) solution builds on non-
shared host/server bare-metal hardware that is assigned to you. The server hardware is embedded in larger
stamps that contain compute/server, networking, and storage infrastructure. As a combination, it's HANA
tailored data center integration (TDI) certified. SAP HANA on Azure (Large Instances) offers different server SKUs, or sizes. Units start at 36 Intel CPU cores with 768 GB of memory and go up to units with up to 480 Intel CPU cores and up to 24 TB of memory.
Customer isolation within the infrastructure stamp is implemented through tenants, as follows:
Networking: Isolation of customers within the infrastructure stack through virtual networks per customer-assigned tenant. A tenant is assigned to a single customer. A customer can have multiple tenants. The network isolation of tenants prohibits network communication between tenants at the infrastructure stamp level, even if the tenants belong to the same customer.
Storage components: Isolation through storage virtual machines that have storage volumes assigned to
them. Storage volumes can be assigned to one storage virtual machine only. A storage virtual machine is
assigned exclusively to one single tenant in the SAP HANA TDI certified infrastructure stack. As a result,
storage volumes assigned to a storage virtual machine can be accessed in one specific and related tenant
only. They aren't visible between the different deployed tenants.
Server or host: A server or host unit isn't shared between customers or tenants. A server or host deployed to a customer is an atomic bare-metal compute unit that is assigned to one single tenant. No hardware
partitioning or soft partitioning is used that might result in you sharing a host or a server with another
customer. Storage volumes that are assigned to the storage virtual machine of the specific tenant are
mounted to such a server. A tenant can have one to many server units of different SKUs exclusively
assigned.
Within an SAP HANA on Azure (Large Instances) infrastructure stamp, many different tenants are deployed
and isolated against each other through the tenant concepts on networking, storage, and compute level.
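To make the isolation model concrete, the following minimal Python sketch mirrors the relationships described above: a customer can own multiple tenants, each tenant has exclusively assigned servers and exactly one storage virtual machine, and tenants never communicate at the stamp level. All class and field names are illustrative, not part of any Azure API.

```python
from dataclasses import dataclass
from typing import List

# Illustrative model of the isolation rules above; names are hypothetical,
# not part of any Azure API.

@dataclass
class StorageVirtualMachine:
    volumes: List[str]          # volumes are visible only within this storage VM

@dataclass
class Tenant:
    customer: str               # a tenant is assigned to exactly one customer
    servers: List[str]          # bare-metal units exclusively assigned to the tenant
    storage: StorageVirtualMachine  # one dedicated storage VM per tenant

def can_communicate(a: Tenant, b: Tenant) -> bool:
    """Tenants can't communicate at the stamp level, even if they
    belong to the same customer."""
    return a is b

# A customer can have multiple tenants, which remain isolated from each other:
t1 = Tenant("Contoso", ["unit-1"], StorageVirtualMachine(["hana-data-1"]))
t2 = Tenant("Contoso", ["unit-2"], StorageVirtualMachine(["hana-data-2"]))
print(can_communicate(t1, t2))  # False
```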
These bare-metal server units are supported to run SAP HANA only. The SAP application layer or workload middleware layer runs in virtual machines. The infrastructure stamps that run the SAP HANA on Azure (Large
Instances) units are connected to the Azure network services backbones. In this way, low-latency connectivity
between SAP HANA on Azure (Large Instances) units and virtual machines is provided.
As of July 2019, we differentiate between two different revisions of HANA Large Instance stamps and location
of deployments:
"Revision 3" (Rev 3): Are the stamps that were made available for customer to deploy before July 2019
"Revision 4" (Rev 4): New stamp design that is deployed in close proximity to Azure VM hosts and which so
far are released in the Azure regions of:
West US2
East US
West Europe
North Europe
This document is one of several documents that cover SAP HANA on Azure (Large Instances). This document
introduces the basic architecture, responsibilities, and services provided by the solution. High-level capabilities
of the solution are also discussed. For most other areas, such as networking and connectivity, four other
documents cover details and drill-down information. The documentation of SAP HANA on Azure (Large
Instances) doesn't cover aspects of the SAP NetWeaver installation or deployments of SAP NetWeaver in VMs.
SAP NetWeaver on Azure is covered in separate documents found in the same Azure documentation container.
The different documents of HANA Large Instance guidance cover the following areas:
SAP HANA (Large Instances) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
Install and configure SAP HANA (Large Instances) on Azure
SAP HANA (Large Instances) high availability and disaster recovery on Azure
SAP HANA (Large Instances) troubleshooting and monitoring on Azure
High availability setup in SUSE by using STONITH
OS backup and restore for Type II SKUs of Revision 3 stamps
Save on SAP HANA Large Instances with an Azure reservation
Next steps
Refer to Know the terms
Know the terms
Several common definitions are widely used in the Architecture and Technical Deployment Guide. Note the
following terms and their meanings:
IaaS: Infrastructure as a service.
PaaS: Platform as a service.
SaaS: Software as a service.
SAP component: An individual SAP application, such as ERP Central Component (ECC), Business
Warehouse (BW), Solution Manager, or Enterprise Portal (EP). SAP components can be based on traditional
ABAP or Java technologies or a non-NetWeaver based application such as Business Objects.
SAP environment: One or more SAP components logically grouped to perform a business function, such
as development, quality assurance, training, disaster recovery, or production.
SAP landscape: Refers to all SAP assets in your IT landscape. The SAP landscape includes all production and non-production environments.
SAP system: The combination of DBMS layer and application layer of, for example, an SAP ERP
development system, an SAP BW test system, and an SAP CRM production system. Azure deployments
don't support dividing these two layers between on-premises and Azure. An SAP system is either deployed
on-premises or it's deployed in Azure. You can deploy the different systems of an SAP landscape into either
Azure or on-premises. For example, you can deploy the SAP CRM development and test systems in Azure
while you deploy the SAP CRM production system on-premises. For SAP HANA on Azure (Large Instances),
it's intended that you host the SAP application layer of SAP systems in VMs and the related SAP HANA
instance on a unit in the SAP HANA on Azure (Large Instances) stamp.
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI-certified and dedicated to
run SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on
SAP HANA TDI-certified hardware that's deployed in Large Instance stamps in different Azure regions. The
related term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in
this technical deployment guide.
Cross-premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-
site, multi-site, or Azure ExpressRoute connectivity between on-premises data centers and Azure. In
common Azure documentation, these kinds of deployments are also described as cross-premises scenarios.
The reason for the connection is to extend on-premises domains, on-premises Azure Active
Directory/OpenLDAP, and on-premises DNS into Azure. The on-premises landscape is extended to the Azure
assets of the Azure subscriptions. With this extension, the VMs can be part of the on-premises domain.
Domain users of the on-premises domain can access the servers and run services on those VMs (such as
DBMS services). Communication and name resolution between VMs deployed on-premises and Azure-
deployed VMs is possible. This scenario is typical of the way in which most SAP assets are deployed. For
more information, see Azure VPN Gateway and Create a virtual network with a site-to-site connection by
using the Azure portal.
Tenant: A customer deployed in a HANA Large Instance stamp gets isolated into a tenant. A tenant is isolated
in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to
the different tenants can't see each other or communicate with each other on the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants. Even then, there is no
communication between tenants on the HANA Large Instance stamp level.
SKU category: For HANA Large Instance, the following two categories of SKUs are offered:
Type I class: S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, and S224m
Type II class: S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, and S960m
Stamp: Defines the Microsoft internal deployment size of HANA Large Instances. Before HANA Large Instance units can be deployed, a HANA Large Instance stamp consisting of compute, network, and storage racks needs to be deployed in a datacenter location. Such a deployment is called a HANA Large Instance stamp or, from Revision 4 (see below) on, we use the alternate term Large Instance Row.
Revision: There are two different stamp revisions for HANA Large Instance stamps. These differ in architecture and proximity to Azure virtual machine hosts:
"Revision 3" (Rev 3): The original design that was deployed from mid-2016 on
"Revision 4" (Rev 4): A new design that can provide closer proximity to Azure virtual machine hosts and, with that, lower network latency between Azure VMs and HANA Large Instance units
"Revision 4.2" (Rev 4.2): on existing Revision 4 DCs, resources are rebranded to BareMetal Infrastructure.
Customers can access their resources as BareMetal instances from the Azure portal.
A variety of additional resources are available on how to deploy an SAP workload in the cloud. If you plan to
execute a deployment of SAP HANA in Azure, you need to be experienced with and aware of the principles of
Azure IaaS and the deployment of SAP workloads on Azure IaaS. Before you continue, see Use SAP solutions on
Azure virtual machines for more information.
Next steps
Refer to HLI Certification
Certification
Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to support SAP HANA on
certain infrastructures, such as Azure IaaS.
The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note #1928533 – SAP
applications on Azure: Supported products and Azure VM types.
The certification records for SAP HANA on Azure (Large Instances) units can be found in the SAP HANA certified
IaaS Platforms site.
The SAP HANA on Azure (Large Instances) types, referred to in the SAP HANA certified IaaS Platforms site, provide Microsoft and SAP customers the ability to deploy large SAP Business Suite, SAP BW, S/4HANA, BW/4HANA, or other SAP HANA workloads in Azure. The solution is based on the SAP HANA-certified dedicated hardware stamp (SAP HANA tailored data center integration – TDI). If you run an SAP HANA TDI-configured solution, all SAP HANA-based applications (such as SAP Business Suite on SAP HANA, SAP BW on SAP HANA, S/4HANA, and BW/4HANA) work on the hardware infrastructure.
Compared to running SAP HANA in VMs, this solution has a benefit: it provides much larger memory volumes.
To enable this solution, you need to understand the following key aspects:
The SAP application layer and non-SAP applications run in VMs that are hosted in the usual Azure hardware
stamps.
Customer on-premises infrastructure, data centers, and application deployments are connected to the cloud
platform through ExpressRoute (recommended) or a virtual private network (VPN). Active Directory and DNS
also are extended into Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure (Large Instances). The Large
Instance stamp is connected into Azure networking, so software running in VMs can interact with the HANA
instance running in HANA Large Instance.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided in an IaaS with SUSE Linux Enterprise Server or Red Hat Enterprise Linux preinstalled. As with virtual machines, further updates and maintenance of the operating system are your responsibility.
Installation of HANA or any additional components necessary to run SAP HANA on units of HANA Large
Instance is your responsibility. All respective ongoing operations and administration of SAP HANA on Azure are
also your responsibility.
In addition to the solutions described here, you can install other components in your Azure subscription that connect to SAP HANA on Azure (Large Instances). Examples are components that enable communication with
or directly to the SAP HANA database, such as jump servers, RDP servers, SAP HANA Studio, SAP Data Services
for SAP BI scenarios, or network monitoring solutions.
As in Azure, HANA Large Instance offers support for high availability and disaster recovery functionality.
Next steps
Refer to Available SKUs for HLI
Available SKUs for HANA Large Instances
The SAP HANA on Azure (Large Instances) service based exclusively on Revision 3 stamps is available in several configurations in the following Azure regions:
Australia East
Australia Southeast
Japan East
Japan West
The SAP HANA on Azure (Large Instances) service based on Revision 4 stamps is available in several configurations in the following Azure regions:
West US 2
East US
The BareMetal Infrastructure service (certified for SAP HANA workloads) is based on Revision 4.2 stamps. It's available in several configurations in the following Azure regions:
West Europe
North Europe
East US 2
South Central US
The available Azure Large Instance types are listed in the following table.
IMPORTANT
Be aware that the first column represents the status of HANA certification for each of the Large Instance types in the list. The column should correlate with the SAP HANA hardware directory for the Azure SKUs that start with the letter S.
(Table of available SKUs not reproduced here; its columns are: SAP HANA certified, Model, Total memory, Memory DRAM, Memory Optane, Storage, and Availability.)
CPU cores = the sum of non-hyper-threaded CPU cores across the processors of the server unit.
CPU threads = the sum of compute threads provided by hyper-threaded CPU cores across the processors of the server unit. Most units are configured by default to use Hyper-Threading Technology.
Based on supplier recommendations, S768m, S768xm, and S960m aren't configured to use Hyper-Threading for running SAP HANA.
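As a short worked example of these two definitions (the socket and core counts below are illustrative, chosen to match the 36-core entry unit mentioned in the overview):

```python
# CPU cores vs. CPU threads for a server unit (illustrative values).
cores_per_processor = 18
processors = 2                   # e.g., a two-socket entry unit
threads_per_core = 2             # Intel Hyper-Threading, where enabled

cpu_cores = cores_per_processor * processors    # 36 non-hyper-threaded cores
cpu_threads = cpu_cores * threads_per_core      # 72 compute threads

print(cpu_cores, cpu_threads)    # 36 72
# For S768m, S768xm, and S960m, Hyper-Threading is disabled for SAP HANA,
# so cpu_threads == cpu_cores on those SKUs.
```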
IMPORTANT
The following SKUs, though still supported, can no longer be purchased: S72, S72m, S144, S144m, S192, and S192m.
The specific configurations chosen are dependent on workload, CPU resources, and desired memory. It's possible
for the OLTP workload to use the SKUs that are optimized for the OLAP workload.
The SKUs are divided into two different classes of hardware:
S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m, S224oo, S224om, S224ooo, and S224oom are referred to as the "Type I class" of SKUs.
All other SKUs are referred to as the "Type II class" of SKUs.
If you are interested in SKUs that are not yet listed in the SAP hardware directory, contact your Microsoft
account team to get more information.
A complete HANA Large Instance stamp isn't exclusively allocated for a single customer's use. This fact applies to
the racks of compute and storage resources connected through a network fabric deployed in Azure as well.
HANA Large Instance infrastructure, like Azure, deploys different customer "tenants" that are isolated from one
another in the following three levels:
Network: Isolation through virtual networks within the HANA Large Instance stamp.
Storage: Isolation through storage virtual machines that have storage volumes assigned and isolate storage volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft partitioning of server units. No sharing of a single server or host unit between tenants.
The deployments of HANA Large Instance units between different tenants aren't visible to each other. HANA
Large Instance units deployed in different tenants can't communicate directly with each other on the HANA Large
Instance stamp level. Only HANA Large Instance units within one tenant can communicate with each other on the
HANA Large Instance stamp level.
A deployed tenant in the Large Instance stamp is assigned to one Azure subscription for billing purposes. From a networking point of view, it can be accessed from virtual networks of other Azure subscriptions within the same Azure enrollment. If you deploy with another Azure subscription in the same Azure region, you also can choose to ask for a separate HANA Large Instance tenant.
There are significant differences between running SAP HANA on HANA Large Instance and SAP HANA running
on VMs deployed in Azure:
There is no virtualization layer for SAP HANA on Azure (Large Instances). You get the performance of the
underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a specific customer. There is no
possibility that a server unit or host is hard or soft partitioned. As a result, a HANA Large Instance unit is used
as assigned as a whole to a tenant and with that to you. A reboot or shutdown of the server doesn't lead
automatically to the operating system and SAP HANA being deployed on another server. (For Type I class
SKUs, the only exception is if a server encounters issues and redeployment needs to be performed on another
server.)
Unlike Azure, where host processor types are selected for the best price/performance ratio, the processor
types chosen for SAP HANA on Azure (Large Instances) are the highest performing of the Intel E7v3 and E7v4
processor line.
Next steps
Refer to HLI Sizing
Sizing
Sizing for HANA Large Instance is no different than sizing for HANA in general. For existing and deployed systems
that you want to move from other RDBMS to HANA, SAP provides a number of reports that run on your existing
SAP systems. If the database is moved to HANA, these reports check the data and calculate memory requirements
for the HANA instance. For more information on how to run these reports and obtain their most recent patches or
versions, read the following SAP Notes:
SAP Note #1793345 - Sizing for SAP Suite on HANA
SAP Note #1872170 - Suite on HANA and S/4 HANA sizing report
SAP Note #2121330 - FAQ: SAP BW on HANA sizing report
SAP Note #1736976 - Sizing report for BW on HANA
SAP Note #2296290 - New sizing report for BW on HANA
For greenfield implementations, SAP Quick Sizer is available to calculate memory requirements of the implementation of SAP software on top of HANA.
Memory requirements for HANA increase as data volume grows. Be aware of your current memory consumption
to help you predict what it's going to be in the future. Based on memory requirements, you then can map your
demand into one of the HANA Large Instance SKUs.
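As a sketch of how such a mapping could look, the following Python fragment picks the smallest SKU whose memory covers the sized requirement. The memory values in the table are illustrative examples only (the data tiering article below cites S192 at 2 TB); always confirm actual SKU sizes against the current Available SKUs for HLI list.

```python
# Hypothetical helper: map a sized HANA memory requirement to an HLI SKU.
# Memory sizes below are illustrative examples, not an authoritative list;
# check the current 'Available SKUs for HLI' table before sizing.
SKU_MEMORY_TB = {
    "S96": 0.75,
    "S192": 2.0,   # 2 TB, as referenced in the data tiering article below
    "S224": 3.0,
    "S384": 4.0,
}

def smallest_fitting_sku(required_memory_tb: float) -> str:
    candidates = [(mem, sku) for sku, mem in SKU_MEMORY_TB.items()
                  if mem >= required_memory_tb]
    if not candidates:
        raise ValueError("Requirement exceeds all listed SKUs; consider scale-out.")
    return min(candidates)[1]

print(smallest_fitting_sku(1.8))   # S192
```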
Next steps
Refer to Onboarding requirements
Onboarding requirements
This list assembles requirements for running SAP HANA on Azure (Large Instances).
Microsoft Azure
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier support contract. For specific information related to running SAP in Azure, see SAP Support
Note #2015553 – SAP on Microsoft Azure: Support prerequisites. If you use HANA Large Instance units with
384 and more CPUs, you also need to extend the Premier support contract to include Azure Rapid Response.
Awareness of the HANA Large Instance SKUs you need after you perform a sizing exercise with SAP.
Network connectivity
ExpressRoute between on-premises and Azure: To connect your on-premises data center to Azure, make sure to order at least a 1-Gbps connection from your ISP. Connectivity between HANA Large Instance units and Azure uses ExpressRoute technology as well. This ExpressRoute connection between the HANA Large Instance units and Azure is included in the price of the HANA Large Instance units, including all data ingress and egress charges for this specific ExpressRoute circuit. Therefore, you as a customer don't encounter additional costs beyond your ExpressRoute link between on-premises and Azure.
Operating system
Licenses for SUSE Linux Enterprise Server 12 for SAP Applications.
NOTE
The operating system delivered by Microsoft isn't registered with SUSE. It isn't connected to a Subscription
Management Tool instance.
SUSE Linux Subscription Management Tool deployed in Azure on a VM. This tool provides the capability for SAP HANA on Azure (Large Instances) to be registered with and updated by SUSE. (There is no internet access within the HANA Large Instance data center.)
Licenses for Red Hat Enterprise Linux 6.7 or 7.x for SAP HANA.
NOTE
The operating system delivered by Microsoft isn't registered with Red Hat. It isn't connected to a Red Hat Subscription
Manager instance.
Red Hat Subscription Manager deployed in Azure on a VM. The Red Hat Subscription Manager provides the capability for SAP HANA on Azure (Large Instances) to be registered with and updated by Red Hat. (There is no direct internet access from within the tenant deployed on the Azure Large Instance stamp.)
SAP requires you to have a support contract with your Linux provider as well. This requirement isn't
removed by the solution of HANA Large Instance or the fact that you run Linux in Azure. Unlike with some
of the Linux Azure gallery images, the service fee is not included in the solution offer of HANA Large
Instance. It's your responsibility to fulfill the requirements of SAP regarding support contracts with the Linux
distributor.
For SUSE Linux, look up the requirements of support contracts in SAP Note #1984787 - SUSE Linux
Enterprise Server 12: Installation notes and SAP Note #1056161 - SUSE priority support for SAP
applications.
For Red Hat Linux, you need to have the correct subscription levels that include support and service updates to the operating systems of HANA Large Instance. Red Hat recommends the Red Hat Enterprise Linux subscription for SAP Solutions. Refer to https://access.redhat.com/solutions/3082481.
For the support matrix of the different SAP HANA versions with the different Linux versions, see SAP Note
#2235581.
For the compatibility matrix of the operating system and HLI firmware/driver versions, refer to OS Upgrade for HLI.
IMPORTANT
For Type II units, only the SLES 12 SP2 OS version is supported at this point.
Database
Licenses and software installation components for SAP HANA (platform or enterprise edition).
Applications
Licenses and software installation components for any SAP applications that connect to SAP HANA and related
SAP support contracts.
Licenses and software installation components for any non-SAP applications used with SAP HANA on Azure
(Large Instances) environments and related support contracts.
Skills
Experience with and knowledge of Azure IaaS and its components.
Experience with and knowledge of how to deploy an SAP workload in Azure.
Personnel certified in SAP HANA installation.
SAP architect skills to design high availability and disaster recovery around SAP HANA.
SAP
The expectation is that you're an SAP customer and have a support contract with SAP.
Especially for implementations of the Type II class of HANA Large Instance SKUs, consult with SAP on versions
of SAP HANA and the eventual configurations on large-sized scale-up hardware.
Next steps
Refer to SAP HANA (Large Instances) architecture on Azure
Use SAP HANA data tiering and extension nodes
SAP supports a data tiering model for SAP BW of different SAP NetWeaver releases and SAP BW/4HANA. For more
information about the data tiering model, see the SAP document SAP BW/4HANA and SAP BW on HANA with SAP
HANA extension nodes. With HANA Large Instance, you can use option-1 configuration of SAP HANA extension
nodes as explained in the FAQ and SAP blog documents. Option-2 configurations can be set up with the following
HANA Large Instance SKUs: S72m, S192, S192m, S384, and S384m.
When you look at the documentation, the advantage might not be visible immediately. But when you look at the
SAP sizing guidelines, you can see an advantage by using option-1 and option-2 SAP HANA extension nodes. Here
are examples:
SAP HANA sizing guidelines usually require twice as much memory as data volume. When you run your SAP HANA instance with the hot data, only 50 percent or less of the memory is filled with data. The remainder of the memory is ideally held free for SAP HANA to do its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory, running an SAP BW database, you only
have 1 TB as data volume.
If you use an additional SAP HANA extension node of option-1, also an S192 HANA Large Instance SKU, it gives
you an additional 2 TB of capacity for data volume. In the option-2 configuration, you get an additional 4 TB for
warm data volume. Compared to the hot node, the full memory capacity of the "warm" extension node can be
used for data storage in option-1. Double the memory can be used for data volume in the option-2 SAP HANA
extension node configuration.
You end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for option-1. You have 5 TB of
data and a 1:4 ratio with the option-2 extension node configuration.
The higher the data volume compared to the memory, the higher the chances are that the warm data you are
asking for is stored on disk storage.
Next steps
See SAP HANA (Large Instances) architecture on Azure
Operations model and responsibilities
The service provided with SAP HANA on Azure (Large Instances) is aligned with Azure IaaS services. You get an
instance of a HANA Large Instance with an installed operating system that is optimized for SAP HANA. As with
Azure IaaS VMs, most of the tasks of hardening the OS, installing additional software, installing HANA, operating
the OS and HANA, and updating the OS and HANA are your responsibility. Microsoft doesn't force OS updates or
HANA updates on you.
As shown in the diagram, SAP HANA on Azure (Large Instances) is a multi-tenant IaaS offer. For the most part, the
division of responsibility is at the OS-infrastructure boundary. Microsoft is responsible for all aspects of the service
below the line of the operating system. You are responsible for all aspects of the service above the line. The OS is
your responsibility. You can continue to use most current on-premises methods you might employ for compliance,
security, application management, basis, and OS management. The systems appear as if they are in your network in
all regards.
This service is optimized for SAP HANA, so there are areas where you need to work with Microsoft to use the
underlying infrastructure capabilities for best results.
The following list provides more detail on each of the layers and your responsibilities:
Networking: All the internal networks for the Large Instance stamp running SAP HANA. Your responsibility
includes access to storage, connectivity between the instances (for scale-out and other functions), connectivity to
the landscape, and connectivity to Azure where the SAP application layer is hosted in VMs. It also includes WAN
connectivity between Azure data centers for disaster recovery replication purposes. All networks are partitioned by
tenant and have quality of service applied.
Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA servers, as well as for
snapshots.
Servers: The dedicated physical servers to run the SAP HANA DBs assigned to tenants. The servers of the Type I
class of SKUs are hardware abstracted. With these types of servers, the server configuration is collected and
maintained in profiles, which can be moved from one physical server to another. Such a (manual) move of a
profile by operations can be compared somewhat to Azure service healing. The servers of the Type II class SKUs
don't offer such a capability.
SDDC: The management software that is used to manage data centers as software-defined entities. It allows
Microsoft to pool resources for scale, availability, and performance reasons.
O/S: The OS you choose (SUSE Linux or Red Hat Linux) that is running on the servers. The OS images you are
supplied with were provided by the individual Linux vendor to Microsoft for running SAP HANA. You must have a
subscription with the Linux vendor for the specific SAP HANA-optimized image. You are responsible for registering
the images with the OS vendor.
From the point of handover by Microsoft, you are responsible for any further patching of the Linux operating
system. This patching includes additional packages that might be necessary for a successful SAP HANA installation
and that weren't included by the specific Linux vendor in their SAP HANA optimized OS images. (For more
information, see SAP's HANA installation documentation and SAP Notes.)
You are responsible for OS patching owing to malfunction or optimization of the OS and its drivers relative to the
specific server hardware. You also are responsible for security or functional patching of the OS.
Your responsibility also includes monitoring and capacity planning of:
CPU resource consumption.
Memory consumption.
Disk volumes related to free space, IOPS, and latency.
Network volume traffic between HANA Large Instance and the SAP application layer.
The underlying infrastructure of HANA Large Instance provides functionality for backup and restore of the OS
volume. Using this functionality is also your responsibility.
Middleware: The SAP HANA instance, primarily. Administration, operations, and monitoring are your
responsibility. You can use the provided storage snapshot functionality for backup and restore and for disaster
recovery purposes. These capabilities are provided by the infrastructure. Your responsibilities also include
designing high availability or disaster recovery with these capabilities, leveraging them, and monitoring to
determine whether storage snapshots executed successfully.
Data: Your data managed by SAP HANA, and other data such as backup files located on volumes or file shares.
Your responsibilities include monitoring disk free space and managing the content on the volumes. You also are
responsible for monitoring the successful execution of backups of disk volumes and storage snapshots. Successful
execution of data replication to disaster recovery sites is the responsibility of Microsoft.
Applications: The SAP application instances or, in the case of non-SAP applications, the application layer of those
applications. Your responsibilities include deployment, administration, operations, and monitoring of those
applications. You are responsible for capacity planning of CPU resource consumption, memory consumption, Azure
Storage consumption, and network bandwidth consumption within virtual networks. You also are responsible for
capacity planning for resource consumption from virtual networks to SAP HANA on Azure (Large Instances).
WANs: The connections you establish from on-premises to Azure deployments for workloads. All customers with
HANA Large Instance use Azure ExpressRoute for connectivity. This connection isn't part of the SAP HANA on Azure
(Large Instances) solution. You are responsible for the setup of this connection.
Archive: You might prefer to archive copies of data by using your own methods in storage accounts. Archiving
brings its own management, compliance, cost, and operations considerations. You are responsible for generating
archive copies and backups on Azure and storing them in a compliant way.
See the SLA for SAP HANA on Azure (Large Instances).
Next steps
See SAP HANA (Large Instances) architecture on Azure
Compatible Operating Systems for HANA Large
Instances
OPERATING SYSTEM | AVAILABILITY | SUPPORTED SKUS
SLES 12 SP2 | Not offered anymore | S72, S72m, S96, S144, S144m, S192, S192m, S192xm
Related Documents
To learn more about available SKUs, see Available SKUs for HLI.
To learn about upgrading the operating system, see OS Upgrade for HANA Large Instances.
SAP HANA (Large Instances) architecture on Azure
At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer residing in VMs.
The database layer resides on SAP TDI-configured hardware located in a Large Instance stamp in the same Azure
region that is connected to Azure IaaS.
NOTE
Deploy the SAP application layer in the same Azure region as the SAP DBMS layer. This rule is well documented in published
information about SAP workloads on Azure.
The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-certified hardware
configuration, which is a non-virtualized, bare metal, high-performance server for the SAP HANA database. It also
provides the ability and flexibility of Azure to scale resources for the SAP application layer to meet your needs.
The architecture shown is divided into three sections:
Right: Shows an on-premises infrastructure that runs different applications in data centers so that end
users can access LOB applications, such as SAP. Ideally, this on-premises infrastructure is connected to Azure
with ExpressRoute.
Center: Shows Azure IaaS and, in this case, use of VMs to host SAP or other applications that use SAP
HANA as a DBMS system. Smaller HANA instances that function with the memory that VMs provide are
deployed in VMs together with their application layer. For more information about virtual machines, see
Virtual machines.
Azure network services are used to group SAP systems together with other applications into virtual
networks. These virtual networks connect to on-premises systems as well as to SAP HANA on Azure (Large
Instances).
For SAP NetWeaver applications and databases that are supported to run in Azure, see SAP Support Note
#1928533 – SAP applications on Azure: Supported products and Azure VM types. For documentation on
how to deploy SAP solutions on Azure, see:
Use SAP on Windows virtual machines
Use SAP solutions on Azure virtual machines
Left: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance stamp. The HANA Large
Instance units are connected to the virtual networks of your Azure subscription by using the same
technology as the connectivity from on-premises into Azure. As of May 2019, an optimization was
introduced that allows the HANA Large Instance units and the Azure VMs to communicate without
involvement of the ExpressRoute gateway. This optimization, called ExpressRoute Fast Path, is displayed in
this architecture (red lines).
The Azure Large Instance stamp itself combines the following components:
Computing: Servers that are based on different generations of Intel Xeon processors, provide the
necessary computing capability, and are SAP HANA certified.
Network: A unified high-speed network fabric that interconnects the computing, storage, and LAN
components.
Storage: A storage infrastructure that is accessed through a unified network fabric. The specific storage
capacity that is provided depends on the specific SAP HANA on Azure (Large Instances) configuration that is
deployed. More storage capacity is available at an additional monthly cost.
Within the multi-tenant infrastructure of the Large Instance stamp, customers are deployed as isolated tenants. At
deployment of the tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription
is the one that the HANA Large Instance is billed against. These tenants have a 1:1 relationship to the Azure
subscription. From a network perspective, it's possible to access a HANA Large Instance unit deployed in one
tenant in one Azure region from different virtual networks that belong to different Azure subscriptions. Those
Azure subscriptions must belong to the same Azure enrollment.
As with VMs, SAP HANA on Azure (Large Instances) is offered in multiple Azure regions. You can opt in to disaster
recovery capabilities. Different Large Instance stamps within one geo-political region are connected to each other.
For example, HANA Large Instance stamps in US West and US East are connected through a dedicated network
link for disaster recovery replication.
Just as you can choose between different VM types with Azure Virtual Machines, you can choose from different
SKUs of HANA Large Instance that are tailored for different workload types of SAP HANA. SAP applies memory-to-
processor-socket ratios for varying workloads based on the Intel processor generations. The following table shows
the SKU types offered.
You can find the available SKUs in Available SKUs for HLI.
Next steps
See SAP HANA (Large Instances) network architecture
SAP HANA (Large Instances) network architecture
The architecture of Azure network services is a key component of the successful deployment of SAP applications
on HANA Large Instance. Typically, SAP HANA on Azure (Large Instances) deployments have a larger SAP
landscape with several different SAP solutions with varying sizes of databases, CPU resource consumption, and
memory utilization. It's likely that not all IT systems are located in Azure already. Your SAP landscape is often
hybrid as well, from both a DBMS and an SAP application point of view, using a mixture of NetWeaver, S/4HANA,
SAP HANA, and other DBMSs. Azure offers different services that allow you to run the different DBMS, NetWeaver,
and S/4HANA systems in Azure. Unless your complete IT landscape is hosted in Azure, Azure networking
functionality is used to connect the on-premises world with your Azure assets, making Azure look like a virtual
datacenter of yours. The Azure network functionality used is:
Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network
assets.
An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps. This
bandwidth allows adequate capacity for the transfer of data between on-premises systems and systems that
run on VMs. It also allows adequate bandwidth for connection to Azure systems from on-premises users.
All SAP systems in Azure are set up in virtual networks to communicate with each other.
Active Directory and DNS hosted on-premises are extended into Azure through ExpressRoute from on-
premises, or they run completely in Azure.
For the specific case of integrating HANA Large Instances into the Azure data center network fabric, Azure
ExpressRoute technology is used as well.
NOTE
Only one Azure subscription can be linked to only one tenant in a HANA Large Instance stamp in a specific Azure region.
Conversely, a single HANA Large Instance stamp tenant can be linked to only one Azure subscription. This requirement is
consistent with other billable objects in Azure.
If SAP HANA on Azure (Large Instances) is deployed in multiple different Azure regions, a separate tenant is
deployed in the HANA Large Instance stamp. You can run both under the same Azure subscription as long as these
instances are part of the same SAP landscape.
IMPORTANT
Only the Azure Resource Manager deployment method is supported with SAP HANA on Azure (Large Instances).
NOTE
The maximum throughput you can achieve with an ExpressRoute gateway is 10 Gbps by using an ExpressRoute
connection. Copying files between a VM that resides in a virtual network and a system on-premises (as a single
copy stream) doesn't achieve the full throughput of the different gateway SKUs. To leverage the complete
bandwidth of the ExpressRoute gateway, use multiple streams: either copy different files in parallel, or copy a
single file by using parallel streams.
IMPORTANT
Given the overall network traffic between the SAP application and database layers, only the HighPerformance or
UltraPerformance gateway SKUs for virtual networks are supported for connecting to SAP HANA on Azure (Large Instances).
For HANA Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported as an ExpressRoute
gateway. Exceptions apply when using ExpressRoute Fast Path (see below).
For more details on how to configure ExpressRoute Fast Path, read the document Connect a virtual network to
HANA large instances.
NOTE
An UltraPerformance ExpressRoute gateway is required for ExpressRoute Fast Path to work.
NOTE
To run SAP landscapes in Azure, connect to the enterprise edge router closest to the Azure region in the SAP landscape.
HANA Large Instance stamps are connected through dedicated enterprise edge router devices to minimize network latency
between VMs in Azure IaaS and HANA Large Instance stamps.
The ExpressRoute gateway for the VMs that host SAP application instances is connected to one ExpressRoute
circuit that connects to on-premises. The same virtual network is connected to a separate enterprise edge router
dedicated to connecting to Large Instance stamps. With ExpressRoute Fast Path, the data flow from HANA Large
Instances to the SAP application layer VMs is no longer routed through the ExpressRoute gateway, which reduces
the network round-trip latency.
This system is a straightforward example of a single SAP system. The SAP application layer is hosted in Azure. The
SAP HANA database runs on SAP HANA on Azure (Large Instances). The assumption is that the ExpressRoute
gateway bandwidth of 2-Gbps or 10-Gbps throughput doesn't represent a bottleneck.
Depending on the rules and restrictions you want to apply between the different virtual networks hosting VMs of
different SAP systems, you should peer those virtual networks. For more information about virtual network
peering, see Virtual network peering.
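As an illustration, a minimal PowerShell sketch of peering two such virtual networks (the network names and resource group are hypothetical) might look like the following. Peering must be created in both directions before traffic flows:
# Peer two virtual networks that host VMs of different SAP systems (hypothetical names)
$vnet1 = Get-AzVirtualNetwork -Name "VNet01" -ResourceGroupName "SAP-East-Coast"
$vnet2 = Get-AzVirtualNetwork -Name "VNet02" -ResourceGroupName "SAP-East-Coast"
# Create the peering in both directions so traffic can flow either way
Add-AzVirtualNetworkPeering -Name "VNet01-to-VNet02" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "VNet02-to-VNet01" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id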
Routing in Azure
In the default deployment, three network routing considerations are important for SAP HANA on Azure (Large
Instances):
SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs and the dedicated
ExpressRoute connection, not directly from on-premises. Direct access from on-premises to the HANA Large
Instance units, as delivered by Microsoft to you, isn't possible immediately. The transitive routing restrictions
are due to the current Azure network architecture used for SAP HANA Large Instance. Some administration
clients and any applications that need direct access, such as SAP Solution Manager running on-premises,
can't connect to the SAP HANA database. For exceptions, check the section 'Direct Routing to HANA Large
Instances'.
If you have HANA Large Instance units deployed in two different Azure regions for disaster recovery, the
same transitive routing restrictions applied in the past. In other words, IP addresses of a HANA Large
Instance unit in one region (for example, US West) were not routed to a HANA Large Instance unit deployed
in another region (for example, US East). This restriction was independent of the use of Azure network
peering across regions or cross-connecting the ExpressRoute circuits that connect HANA Large Instance
units to virtual networks. For a graphic representation, see the figure in the section "Use HANA Large
Instance units in multiple regions." This restriction, which came with the deployed architecture, prohibited
the immediate use of HANA System Replication as disaster recovery functionality. For recent changes, look
up the section 'Use HANA Large Instance units in multiple regions'.
SAP HANA on Azure (Large Instances) units have an assigned IP address from the server IP pool address
range that you submitted when requesting the HANA Large Instance deployment. For more information,
see SAP HANA (Large Instances) infrastructure and connectivity on Azure. This IP address is accessible
through the Azure subscriptions and circuit that connects Azure virtual networks to HANA Large Instances.
The IP address assigned out of that server IP pool address range is directly assigned to the hardware unit.
It's not assigned through NAT anymore, as was the case in the first deployments of this solution.
Direct Routing to HANA Large Instances
By default, transitive routing doesn't work in these scenarios:
Between HANA Large Instance units and an on-premises deployment.
Between HANA Large Instance units that are deployed in two different regions.
There are three ways to enable transitive routing in those scenarios:
A reverse proxy to route data to and from, for example, F5 BIG-IP or NGINX with Traffic Manager, deployed in the
Azure virtual network that connects to HANA Large Instances and to on-premises as a virtual firewall/traffic
routing solution.
Using IPTables rules in a Linux VM to enable routing between on-premises locations and HANA Large Instance
units, or between HANA Large Instance units in different regions. The VM running IPTables needs to be
deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises. The VM
needs to be sized so that its network throughput is sufficient for the expected network traffic. For details on VM
network bandwidth, check the article Sizes of Linux virtual machines in Azure.
Azure Firewall would be another solution to enable direct traffic between on-premises and HANA Large
instance units.
All the traffic of these solutions would be routed through an Azure virtual network. As such, the traffic could be
additionally restricted by the soft appliances used or by Azure network security groups, so that certain IP
addresses or IP address ranges from on-premises could be blocked or explicitly allowed to access HANA Large
Instances.
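As an illustration, a minimal PowerShell sketch of such a network security group rule (all names, address ranges, and the port are hypothetical; a real deployment would cover all required HANA ports) might look like:
# Allow one on-premises range to reach the HANA Large Instance server IP pool on a single HANA port
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-onprem-to-hli" -Access Allow `
    -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.20.0.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.250.0.0/24" -DestinationPortRange "30015"
New-AzNetworkSecurityGroup -Name "nsg-hli-traffic" -ResourceGroupName "SAP-East-Coast" `
    -Location "eastus" -SecurityRules $rule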
NOTE
Be aware that implementation and support for custom solutions involving third-party network appliances or IPTables isn't
provided by Microsoft. Support must be provided by the vendor of the component used or the integrator.
In the Azure regions where Global Reach is offered, you can request enabling the Global Reach functionality for
your ExpressRoute circuit that connects your on-premises network to the Azure virtual network that connects to
your HANA Large Instance units as well. There are some cost implications for the on-premises side of your
ExpressRoute circuit. For prices, check the prices for Global Reach Add-On. There are no additional costs for you
related to the circuit that connects the HANA Large Instance unit(s) to Azure.
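Enabling Global Reach for the HANA Large Instance circuit is requested through Microsoft, but as an illustration, the generic PowerShell pattern for connecting two ExpressRoute circuits looks like the following sketch (circuit name, resource group, peering ID, authorization key, and the /29 range are all hypothetical placeholders):
# Connect your on-premises circuit to a peer circuit via Global Reach (hypothetical values)
$ckt = Get-AzExpressRouteCircuit -Name "OnPrem-Circuit" -ResourceGroupName "SAP-East-Coast"
Add-AzExpressRouteCircuitConnectionConfig -Name "GlobalReach-to-HLI" -ExpressRouteCircuit $ckt `
    -PeerExpressRouteCircuitPeering "<peer circuit private peering resource ID>" `
    -AddressPrefix "192.168.240.0/29" -AuthorizationKey "<authorization key>"
# Persist the configuration change on the circuit
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt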
IMPORTANT
If you use Global Reach to enable direct access between your HANA Large Instance units and on-premises assets,
the network data and control flow is not routed through Azure virtual networks, but directly between the Microsoft
enterprise exchange routers. As a result, any NSG or ASG rules, or any type of firewall, NVA, or proxy you deployed
in an Azure virtual network, aren't traversed. If you use ExpressRoute Global Reach to enable direct access from
on-premises to HANA Large Instance units, restrictions and permissions for access to HANA Large Instance units
need to be defined in firewalls on the on-premises side.
Connecting HANA Large Instances in different Azure regions
In the same way as ExpressRoute Global Reach can be used for connecting on-premises to HANA Large Instance
units, it can be used to connect two HANA Large Instance tenants that are deployed for you in two different
regions. The connection is established between the ExpressRoute circuits that your HANA Large Instance tenants
use to connect to Azure in both regions. There are no additional charges for connecting two HANA Large Instance
tenants that are deployed in two different regions.
IMPORTANT
The data flow and control flow of the network traffic between the different HANA Large Instance tenants will not be
routed through Azure networks. As a result, you can't use Azure functionality or NVAs to enforce communication
restrictions between your two HANA Large Instance tenants.
For more details on how to get ExpressRoute Global Reach enabled, read the document Connect a virtual network
to HANA large instances.
The figure shows how the different virtual networks in both regions are connected to two different ExpressRoute
circuits that are used to connect to SAP HANA on Azure (Large Instances) in both Azure regions (grey lines). The
reason for these two cross-connections is to protect against an outage of the MSEEs on either side. The
communication flow between the two virtual networks in the two Azure regions is supposed to be handled over
the global peering of the two virtual networks in the two different regions (blue dotted line). The thick red line
describes the ExpressRoute Global Reach connection, which allows the HANA Large Instance units of your tenants
in two different regions to communicate with each other.
IMPORTANT
If you used multiple ExpressRoute circuits, AS Path prepending and Local Preference BGP settings should be used to ensure
proper routing of traffic.
Next steps
See SAP HANA (Large Instances) storage architecture
SAP HANA (Large Instances) storage architecture
The storage layout for SAP HANA on Azure (Large Instances) follows the classic deployment model and is
configured per SAP recommended guidelines. The guidelines are documented in the SAP HANA storage
requirements white paper.
HANA Large Instance units of the Type I class come with storage volume equal to four times the memory volume.
For the Type II class of HANA Large Instance units, the storage isn't four times more. The units come with a volume that is
intended for storing HANA transaction log backups. For more information, see Install and configure SAP HANA
(Large Instances) on Azure.
The following table lists the rough capacity for the different volumes provided with the different HANA Large
Instance units.
HANA LARGE INSTANCE SKU | HANA/DATA | HANA/LOG | HANA/SHARED | HANA/LOGBACKUPS
Actual deployed volumes might vary based on deployment and the tool that is used to show the volume sizes.
If you subdivide a HANA Large Instance SKU, a few examples of possible division pieces might look like the following:
MEMORY PARTITION IN GB | HANA/DATA | HANA/LOG | HANA/SHARED | HANA/LOG/BACKUP
These sizes are rough volume numbers that can vary slightly based on deployment and the tools used to look at
the volumes. There also are other partition sizes, such as 2.5 TB. These storage sizes are calculated with a formula
similar to the one used for the previous partitions. The term "partitions" doesn't mean that the operating system,
memory, or CPU resources are in any way partitioned. It indicates storage partitions for the different HANA
instances you might want to deploy on one single HANA Large Instance unit.
You might need more storage. You can add storage by purchasing additional storage in 1-TB units. This additional
storage can be added as an additional volume. It also can be used to extend one or more of the existing volumes. It
isn't possible to decrease the sizes of the volumes as originally deployed and mostly documented by the previous
tables. It also isn't possible to change the names of the volumes or mount names. The storage volumes previously
described are attached to the HANA Large Instance units as NFS4 volumes.
You can use storage snapshots for backup and restore and disaster recovery purposes. For more information, see
SAP HANA (Large Instances) high availability and disaster recovery on Azure.
See HLI supported scenarios for the storage layout details for your scenario.
Run multiple SAP HANA instances on one HANA Large Instance unit
It's possible to host more than one active SAP HANA instance on HANA Large Instance units. To provide the
capabilities of storage snapshots and disaster recovery, such a configuration requires a volume set per instance.
Currently, HANA Large Instance units can be subdivided as follows:
S72, S72m, S96, S144, S192 : In increments of 256 GB, with 256 GB the smallest starting unit. Different
increments such as 256 GB and 512 GB can be combined to the maximum of the memory of the unit.
S144m and S192m : In increments of 256 GB, with 512 GB the smallest unit. Different increments such as 512
GB and 768 GB can be combined to the maximum of the memory of the unit.
Type II class : In increments of 512 GB, with the smallest starting unit of 2 TB. Different increments such as 512
GB, 1 TB, and 1.5 TB can be combined to the maximum of the memory of the unit.
A few examples of running multiple SAP HANA instances might look like the following.
SKU | MEMORY SIZE | STORAGE SIZE | SIZES WITH MULTIPLE DATABASES
IMPORTANT
To prevent HANA from trying to grow data files beyond the 16-TB file size limit of HANA Large Instance storage, you
need to set the following parameters in the global.ini configuration file of HANA:
datavolume_striping = true
datavolume_striping_size_gb = 15000
See also SAP note #2400005 and be aware of SAP note #2631285.
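As a sketch of where these entries typically live in global.ini (the persistence section placement is an assumption here; confirm against the SAP notes above):
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000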
Next steps
See Supported scenarios for HANA Large Instances
Supported scenarios for HANA Large Instances
This article describes the supported scenarios and architecture details for HANA Large Instances (HLI).
NOTE
If your required scenario is not mentioned in this article, contact the Microsoft Service Management team to assess your
requirements. Before you set up the HLI unit, validate the design with SAP or your service implementation partner.
Overview
HANA Large Instances supports a variety of architectures to help you accomplish your business requirements.
The following sections cover the architectural scenarios and their configuration details.
The derived architecture design is purely from an infrastructure perspective, and you must consult SAP or your
implementation partners for the HANA deployment. If your scenarios are not listed in this article, contact the
Microsoft account team to review the architecture and derive a solution for you.
NOTE
These architectures are fully compliant with SAP HANA Tailored Data Center Integration (TDI) design and supported by SAP.
This article describes the details of the two components in each supported architecture:
Ethernet
Storage
Ethernet
Each provisioned server comes preconfigured with sets of Ethernet interfaces. The Ethernet interfaces configured
on each HLI unit are categorized into four types:
A: Used for client access.
B: Used for node-to-node communication. This interface is configured on all servers (irrespective of the
topology requested) but used only for scale-out scenarios.
C: Used for node-to-storage connectivity.
D: Used for node-to-iSCSI device connection for STONITH setup. This interface is configured only when an HSR
setup is requested.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
You choose the interface based on the topology that's configured on the HLI unit. For example, interface “B” is set
up for node-to-node communication, which is useful when you have a scale-out topology configured. This
interface isn't used for single node, scale-up configurations. For more information about interface usage, review
your required scenarios (later in this article).
If necessary, you can define additional NIC cards on your own. However, the configurations of existing NICs can't
be changed.
NOTE
You might find additional interfaces that are physical interfaces or bonding. You should consider only the previously
mentioned interfaces for your use case. Any others can be ignored.
The distribution for units with two assigned IP addresses should look like the following:
Ethernet “A” should have an assigned IP address that's within the server IP pool address range that you
submitted to Microsoft. This IP address should be maintained in the /etc/hosts file of the OS.
Ethernet “C” should have an assigned IP address that's used for communication to NFS. This address does
not need to be maintained in the /etc/hosts file to allow instance-to-instance traffic within the tenant.
For HANA System Replication or HANA scale-out deployment, a blade configuration with two assigned IP
addresses is not suitable. If you have only two assigned IP addresses and you want to deploy such a configuration,
contact SAP HANA on Azure Service Management. They can assign you a third IP address in a third VLAN. For
HANA Large Instances units with three assigned IP addresses on three NIC ports, the following usage rules apply:
Ethernet “A” should have an assigned IP address that's outside of the server IP pool address range that you
submitted to Microsoft. This IP address should not be maintained in the /etc/hosts file of the OS.
Ethernet “B” should be maintained exclusively in the /etc/hosts file for communication between the
various instances. These are the IP addresses to be maintained in scale-out HANA configurations as the IP
addresses that HANA uses for the inter-node configuration (see the sketch after this list).
Ethernet “C” should have an assigned IP address that's used for communication to NFS storage. This type
of address should not be maintained in the /etc/hosts file.
Ethernet “D” should be used exclusively for access to STONITH devices for Pacemaker. This interface is
required when you configure HANA System Replication and want to achieve auto failover of the operating
system by using an SBD-based device.
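As an illustration, the /etc/hosts entries for the Ethernet “B” inter-node addresses of a three-node scale-out system might look like the following sketch (all host names and IP addresses are hypothetical):
# Inter-node (Ethernet "B") addresses of a hypothetical three-node scale-out system
10.23.108.11   hana-node1
10.23.108.12   hana-node2
10.23.108.13   hana-node3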
Storage
Storage is preconfigured based on the requested topology. The volume sizes and mount points vary depending
on the number of servers, the number of SKUs, and the configured topology. For more information, review your
required scenarios (later in this article). If you require more storage, you can purchase it in 1-TB increments.
NOTE
The mount point /usr/sap/<SID> is a symbolic link to the /hana/shared mount point.
Supported scenarios
The architecture diagrams in the next sections use the following notations:
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Volume size distribution is based on the database size in memory. To learn what database sizes in memory are
supported in a multi-SID environment, see Overview and architecture.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.
NOTE
As of December 2019, this architecture is supported only for the SUSE operating system.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.
At the DR site: Two sets of storage volumes are required for primary and secondary node replication.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.
Ethernet
The following network interfaces are preconfigured:
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
On standby: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the HANA instance installation at the standby unit.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Ethernet
The following network interfaces are preconfigured:
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
On the DR node
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured on both the HLI units (Primary and DR):
MOUNT POINT | USE CASE
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
The primary node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.
Ethernet
The following network interfaces are preconfigured:
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “PROD Instance at DR site”) for the
production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The primary node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD DR instance”) for the
production HANA instance installation at the DR HLI unit.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD DR instance”) for the
production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.
NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
On the DR node
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured for the production HANA instance installation at
the DR HLI unit.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.
Next steps
Infrastructure and connectivity for HANA Large Instances
High availability and disaster recovery for HANA Large Instances
SAP HANA (large instances) deployment
This article assumes that you've completed your purchase of SAP HANA on Azure (large instances) from
Microsoft. Before reading this article, for general background, see HANA large instances common terms and
HANA large instances SKUs.
Microsoft requires the following information to deploy HANA large instance units:
Customer name.
Business contact information (including email address and phone number).
Technical contact information (including email address and phone number).
Technical networking contact information (including email address and phone number).
Azure deployment region (for example, West US, Australia East, or North Europe).
SAP HANA on Azure (large instances) SKU (configuration).
For every Azure deployment region:
A /29 IP address range for ER-P2P connections that connect Azure virtual networks to HANA large
instances.
A /24 CIDR block used for the HANA large instances server IP pool.
Optionally, when using ExpressRoute Global Reach to enable direct routing from on-premises to HANA
Large Instance units, or routing between HANA Large Instance units in different Azure regions, you need
to reserve another /29 IP address range. This particular range must not overlap with any of the other IP
address ranges you defined before.
The IP address range values used in the virtual network address space attribute of every Azure virtual network
that connects to the HANA large instances.
Data for each HANA large instances system:
Desired hostname, ideally with a fully qualified domain name.
Desired IP address for the HANA large instance unit out of the Server IP pool address range. (The first
30 IP addresses in the server IP pool address range are reserved for internal use within HANA large
instances.)
SAP HANA SID name for the SAP HANA instance (required to create the necessary SAP HANA-related
disk volumes). Microsoft needs the HANA SID for creating the permissions for sidadm on the NFS
volumes. These volumes attach to the HANA large instance unit. The HANA SID is also used as one of
the name components of the disk volumes that get mounted. If you want to run more than one HANA
instance on the unit, you should list multiple HANA SIDs. Each one gets a separate set of volumes
assigned.
In the Linux OS, the sidadm user has a group ID. This ID is required to create the necessary SAP HANA-
related disk volumes. The SAP HANA installation usually creates the sapsys group, with a group ID of
1001. The sidadm user is part of that group.
In the Linux OS, the sidadm user has a user ID. This ID is required to create the necessary SAP HANA-
related disk volumes. If you're running several HANA instances on the unit, list all the sidadm users.
The Azure subscription ID for the Azure subscription to which the SAP HANA large instances are going to be
directly connected. This subscription ID references the Azure subscription that is going to be charged for the
HANA large instance unit or units.
After you provide the preceding information, Microsoft provisions SAP HANA on Azure (large instances).
Microsoft sends you information to link your Azure virtual networks to HANA large instances. You can also access
the HANA large instance units.
Use the following sequence to connect to the HANA large instances after Microsoft has deployed them:
1. Connecting Azure VMs to HANA large instances
2. Connecting a VNet to HANA large instances ExpressRoute
3. Additional network requirements (optional)
Connecting Azure VMs to HANA Large Instances
The article What is SAP HANA on Azure (Large Instances)? mentions that the minimal deployment of HANA Large
Instances with the SAP application layer in Azure looks like the following:
Looking closer at the Azure virtual network side, there is a need for:
The definition of an Azure virtual network into which you're going to deploy the VMs of the SAP application
layer.
The definition of a default subnet in the Azure virtual network into which the VMs are deployed.
The Azure virtual network that's created needs to have at least one VM subnet and one Azure ExpressRoute
virtual network gateway subnet. These subnets should be assigned the IP address ranges as specified and
discussed in the following sections.
You can use the Azure portal, PowerShell, an Azure template, or the Azure CLI to create the virtual network. (For
more information, see Create a virtual network using the Azure portal). In the following example, we look at a
virtual network that's created by using the Azure portal.
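As an alternative to the portal flow, a minimal PowerShell sketch of an equivalent virtual network (hypothetical names; the 10.16.0.0/16 address space matches the example discussed below) might look like:
# Define a VM subnet and the mandatory gateway subnet, then create the virtual network
$subnet1 = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.16.1.0/24"
$subnet2 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.16.2.0/28"
New-AzVirtualNetwork -Name "VNet01" -ResourceGroupName "SAP-East-Coast" -Location "eastus" `
    -AddressPrefix "10.16.0.0/16" -Subnet $subnet1,$subnet2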
When we refer to the address space in this documentation, we mean the address space that the Azure virtual
network is allowed to use. This address space is also the address range that the virtual network uses for BGP route
propagation. This address space can be seen here:
In the previous example, with 10.16.0.0/16, the Azure virtual network was given a rather large and wide IP address
range to use. Therefore, all the IP address ranges of subsequent subnets within this virtual network can have their
ranges within that address space. We don't usually recommend such a large address range for a single virtual
network in Azure. But let's look into the subnets that are defined in the Azure virtual network:
We look at a virtual network with a first VM subnet (here called "default") and a subnet called "GatewaySubnet".
In the two previous graphics, the virtual network address space covers both the subnet IP address range of
the Azure VM and that of the virtual network gateway.
You can restrict the virtual network address space to the specific ranges used by each subnet. You can also
define the virtual network address space of a virtual network as multiple specific ranges, as shown here:
In this case, the virtual network address space has two spaces defined. They are the same as the IP address
ranges that are defined for the subnet IP address range of the Azure VM and the virtual network gateway.
You can use any naming standard you like for these tenant subnets (VM subnets). However, there must always
be one, and only one, gateway subnet for each virtual network that connects to the SAP HANA on Azure
(Large Instances) ExpressRoute circuit. This gateway subnet has to be named "GatewaySubnet" to make
sure that the ExpressRoute gateway is properly placed.
WARNING
It's critical that the gateway subnet always be named "GatewaySubnet".
You can use multiple VM subnets and non-contiguous address ranges. These address ranges must be covered by
the virtual network address space of the virtual network. They can be in an aggregated form. They can also be
in a list of the exact ranges of the VM subnets and the gateway subnet.
Following is a summary of the important facts about an Azure virtual network that connects to HANA Large
Instances:
You must submit the virtual network address space to Microsoft when you're performing an initial
deployment of HANA Large Instances.
The virtual network address space can be one larger range that covers the ranges for both the subnet IP
address range of the Azure VM and the virtual network gateway.
Or you can submit multiple ranges that cover the different IP address ranges of VM subnet IP address range(s)
and the virtual network gateway IP address range.
The defined virtual network address space is used for BGP routing propagation.
The name of the gateway subnet must be "GatewaySubnet".
The address space is used as a filter on the HANA Large Instance side to allow or disallow traffic to the HANA
Large Instance units from Azure. The BGP routing information of the Azure virtual network and the IP address
ranges that are configured for filtering on the HANA Large Instance side should match. Otherwise, connectivity
issues can occur.
There are some details about the gateway subnet that are discussed later, in the section Connecting a virtual
network to HANA Large Instance ExpressRoute.
The graphic does not show the additional IP address range(s) that are required for the optional use of ExpressRoute
Global Reach.
You can also aggregate the data that you submit to Microsoft. In that case, the address space of the Azure virtual
network only includes one space. Using the IP address ranges from the earlier example, the aggregated virtual
network address space could look like the following image:
In the example, instead of two smaller ranges that defined the address space of the Azure virtual network, we have
one larger range that covers 4096 IP addresses. Such a large definition of the address space leaves some rather
large ranges unused. Since the virtual network address space value(s) are used for BGP route propagation, usage
of the unused ranges on-premises or elsewhere in your network can cause routing issues. The graphic does not
show the additional IP address range(s) that are required for the optional use of ExpressRoute Global Reach.
We recommend that you keep the address space tightly aligned with the actual subnet address space that you use.
If needed, without incurring downtime on the virtual network, you can always add new address space values later.
IMPORTANT
Each IP address range in ER-P2P, the server IP pool, and the Azure virtual network address space must NOT overlap with one
another or with any other range that's used in your network. Each must be discrete. As the two previous graphics show, they
also can't be a subnet of any other range. If overlaps occur between ranges, the Azure virtual network might not connect to
the ExpressRoute circuit.
Next steps
Refer to Connecting a virtual network to HANA Large Instance ExpressRoute.
Connect a virtual network to HANA large instances
12/22/2020 • 7 minutes to read • Edit Online
After you've created an Azure virtual network, you can connect that network to SAP HANA on Azure large
instances. Create an Azure ExpressRoute gateway on the virtual network. This gateway enables you to link the
virtual network to the ExpressRoute circuit that connects to the customer tenant on the HANA Large Instance
stamp.
NOTE
This step can take up to 30 minutes to complete. The new gateway is created in the designated Azure subscription, and
then connected to the specified Azure virtual network.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.
If a gateway already exists, check whether it's an ExpressRoute gateway or not. If it is not an ExpressRoute gateway,
delete the gateway, and re-create it as an ExpressRoute gateway. If an ExpressRoute gateway is already established,
see the following section of this article, "Link virtual networks."
Use either the Azure portal or PowerShell to create an ExpressRoute gateway connected to your virtual
network.
If you use the Azure portal, add a new Virtual Network Gateway, and then select ExpressRoute as
the gateway type.
If you use PowerShell, first download and use the latest Azure PowerShell SDK.
The following commands create an ExpressRoute gateway. The values preceded by a $ are user-defined variables
that should be updated with your specific information.
# These Values should already exist, update to match your environment
$myAzureRegion = "eastus"
$myGroupName = "SAP-East-Coast"
$myVNetName = "VNet01"
# These values are used to create the gateway, update for how you wish the GW components to be named
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "HighPerformance" # Supported values for HANA Large Instances are: HighPerformance or UltraPerformance
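The gateway-creation commands themselves aren't reproduced above. A minimal sketch that continues with the variables just defined (a sketch of the usual sequence, not the verbatim original) might look like:
# Look up the virtual network and its gateway subnet
$vnet = Get-AzVirtualNetwork -Name $myVNetName -ResourceGroupName $myGroupName
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
# Create the public IP address and the gateway IP configuration
$pip = New-AzPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName `
    -Location $myAzureRegion -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name $myGWConfig -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
# Create the ExpressRoute gateway itself
New-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName -Location $myAzureRegion `
    -IpConfigurations $ipconf -GatewayType ExpressRoute -GatewaySku $myGWSku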
In this example, the HighPerformance gateway SKU was used. Your options are HighPerformance or
UltraPerformance as the only gateway SKUs that are supported for SAP HANA on Azure (large instances).
IMPORTANT
For HANA large instances of the Type II class SKU, you must use the UltraPerformance Gateway SKU.
# Create a new connection between the ER Circuit and your Gateway using the Authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
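The connection command itself is elided here. A minimal sketch (the connection name is hypothetical; the circuit resource ID and authorization key are provided by Microsoft) might look like:
New-AzVirtualNetworkGatewayConnection -Name "HLI-Connection" -ResourceGroupName $myGroupName `
    -Location $myAzureRegion -VirtualNetworkGateway1 $gw -PeerId "<HLI ExpressRoute circuit resource ID>" `
    -ConnectionType ExpressRoute -AuthorizationKey "<authorization key>" -ExpressRouteGatewayBypass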
NOTE
The last parameter in the command New-AzVirtualNetworkGatewayConnection, ExpressRouteGatewayBypass, is a new
parameter that enables ExpressRoute Fast Path, a functionality that reduces network latency between your HANA Large
Instance units and Azure VMs. The functionality was added in May 2019. For more details, check the article SAP HANA
(Large Instances) network architecture. Make sure that you are running the latest version of the PowerShell cmdlets before
running the commands.
To connect the gateway to more than one ExpressRoute circuit associated with your subscription, you might need
to run this step more than once. For example, you're likely going to connect the same virtual network gateway to
the ExpressRoute circuit that connects the virtual network to your on-premises network.
# Create a new connection between the ER Circuit and your Gateway using the Authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
It's important that you add the ExpressRouteGatewayBypass parameter, as displayed above, to enable the
ExpressRoute Fast Path functionality.
NOTE
If you want to have both cases handled, you need to supply two different /29 IP address ranges that do not overlap with
any other IP address range used so far.
Next steps
Additional network requirements for HLI
Additional network requirements for large instances
You might have additional network requirements as part of a deployment of large instances of SAP HANA on
Azure.
Delete a subnet
To remove a virtual network subnet, you can use the Azure portal, PowerShell, or the Azure CLI. If your Azure
virtual network IP address range or address space was an aggregated range, no follow-up with Microsoft is
required. (Note, however, that the virtual network is still propagating the BGP route address space that includes
the deleted subnet.) If you defined the Azure virtual network address range or address space as multiple IP
address ranges, of which one was assigned to your deleted subnet, be sure to delete that range from your virtual
network address space. Then inform SAP HANA on Azure Service Management to remove it from the ranges
that SAP HANA on Azure (Large Instances) is allowed to communicate with.
For more information, see Delete a subnet.
Next steps
How to install and configure SAP HANA (large instances) on Azure
How to install and configure SAP HANA (Large
Instances) on Azure
Before reading this article, get familiar with HANA Large Instances common terms and the HANA Large Instances
SKUs.
The installation of SAP HANA is your responsibility. You can start installing a new SAP HANA on Azure (Large
Instances) server after you establish the connectivity between your Azure virtual networks and the HANA Large
Instance unit(s).
NOTE
Per SAP policy, the installation of SAP HANA must be performed by a person who has passed the Certified SAP Technology
Associate exam (SAP HANA Installation certification) or by an SAP-certified system integrator (SI).
When you're planning to install HANA 2.0, see SAP support note #2235581 - SAP HANA: Supported operating
systems to make sure that the OS is supported with the SAP HANA release that you're installing. The set of
supported operating systems for HANA 2.0 is more restrictive than for HANA 1.0. Also check that the OS release
you're interested in is listed as supported for the particular HLI unit on this published list.
Select the unit to see the full details, including its supported OS list.
Validate the following before you begin the HANA installation:
HLI unit(s)
Operating system configuration
Network configuration
Storage configuration
Operating system
The swap space of the delivered OS image is set to 2 GB according to the SAP support note #1999997 - FAQ: SAP
HANA memory. As a customer, if you want a different setting, you must set it yourself.
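Before changing the swap setting, you can verify the delivered configuration from the shell. This is a minimal
sketch; the value you see should match the 2 GB mentioned above.
free -g          # shows total memory and swap, in GB
cat /proc/swaps  # lists the active swap devices and their sizes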
SUSE Linux Enterprise Server 12 SP1 for SAP applications is the distribution of Linux that's installed for SAP HANA
on Azure (Large Instances). This particular distribution provides SAP-specific capabilities "out of the box"
(including pre-set parameters for running SAP on SLES effectively).
See Resource library/white papers on the SUSE website and SAP on SUSE on the SAP Community Network (SCN)
for several useful resources related to deploying SAP HANA on SLES (including the set-up of high availability,
security hardening that's specific to SAP operations, and more).
The following are additional useful SAP on SUSE-related links:
SAP HANA on SUSE Linux site
Best practices for SAP: Enqueue replication – SAP NetWeaver on SUSE Linux Enterprise 12
ClamSAP – SLES virus protection for SAP (including SLES 12 for SAP applications)
The following are SAP support notes that are applicable to implementing SAP HANA on SLES 12:
SAP support note #1944799 – SAP HANA guidelines for SLES operating system installation
SAP support note #2205917 – SAP HANA DB recommended OS settings for SLES 12 for SAP applications
SAP support note #1984787 – SUSE Linux Enterprise Server 12: installation notes
SAP support note #171356 – SAP software on Linux: General information
SAP support note #1391070 – Linux UUID solutions
Red Hat Enterprise Linux for SAP HANA is another offer for running SAP HANA on HANA Large Instances.
Releases of RHEL 7.2 and 7.3 are available and supported.
Following are additional useful SAP on Red Hat related links:
SAP HANA on Red Hat Linux site.
Following are SAP support notes that are applicable to implementing SAP HANA on Red Hat:
SAP support note #2009879 - SAP HANA guidelines for Red Hat Enterprise Linux (RHEL) operating system
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
SAP support note #1391070 – Linux UUID solutions
SAP support note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES
11
SAP support note #2397039 - FAQ: SAP on RHEL
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and upgrade
Time synchronization
SAP applications that are built on the SAP NetWeaver architecture are sensitive to time differences for the various
components that comprise the SAP system. SAP ABAP short dumps with the error title of
ZDATE_LARGE_TIME_DIFF are probably familiar. That's because these short dumps appear when the system time
of different servers or VMs is drifting too far apart.
For SAP HANA on Azure (Large Instances), the time synchronization that Azure performs doesn't apply to the compute
units in the Large Instance stamps. This isn't an issue for SAP applications that run in native
Azure VMs, because Azure ensures that a VM's system time is properly synchronized.
As a result, you must set up a separate time server that can be used by SAP application servers that are running
on Azure VMs and by the SAP HANA database instances that are running on HANA Large Instances. The storage
infrastructure in Large Instance stamps is time-synchronized with NTP servers.
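As a minimal sketch, assuming a SLES 12 system and a reachable time server named time.contoso.com (a
placeholder, not a value from this guide), pointing a HANA Large Instance unit and your SAP application server
VMs at the same NTP source could look like this (run as root):
echo "server time.contoso.com iburst" >> /etc/ntp.conf   # use the same time source on all systems
systemctl enable ntpd
systemctl restart ntpd
ntpq -p   # verify that the peer is reachable and selected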
Networking
We assume that you followed the recommendations in designing your Azure virtual networks and in connecting
those virtual networks to the HANA Large Instances, as described in the following documents:
SAP HANA (Large Instance) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
There are some details worth mentioning about the networking of the single units. Every HANA Large Instance
unit comes with two or three IP addresses that are assigned to two or three NIC ports. Three IP addresses are used
in HANA scale-out configurations and the HANA system replication scenario. One of the IP addresses that's
assigned to the NIC of the unit is out of the server IP pool that's described in SAP HANA (Large Instances)
overview and architecture on Azure.
For more information about Ethernet details for your architecture, see the HLI supported scenarios.
Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure
Service Management according to SAP recommended guidelines. These guidelines are documented in the SAP HANA
storage requirements white paper.
The rough sizes of the different volumes for the different HANA Large Instances SKUs are documented in SAP
HANA (Large Instances) overview and architecture on Azure.
The naming conventions of the storage volumes are listed in the following table:
Storage usage | Mount name | Volume name
The output of the command df -h on an S72m HANA Large Instance unit reflects this naming convention.
The storage controller and nodes in the Large Instance stamps are synchronized to NTP servers. When you
synchronize the SAP HANA on Azure (Large Instances) units and Azure VMs against an NTP server, there should
be no significant time drift between the infrastructure and the compute units in Azure or Large Instance stamps.
To optimize SAP HANA to the storage used underneath, set the following SAP HANA configuration parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the installation of the SAP HANA
database, as described in SAP note #2267798 - Configuration of the SAP HANA database.
You can also configure the parameters after the SAP HANA database installation by using the hdbparam
framework.
The storage used in HANA Large Instances has a file size limitation of 16 TB per file. Unlike file size
limitations in EXT3 file systems, HANA isn't implicitly aware of the limit enforced by the HANA Large Instances
storage. As a result, HANA won't automatically create a new data file when the 16-TB file size limit is reached.
When HANA attempts to grow a file beyond 16 TB, HANA reports errors, and the index server eventually crashes.
IMPORTANT
To prevent HANA from trying to grow data files beyond the 16-TB file size limit of HANA Large Instance storage, set
the following parameters in the SAP HANA global.ini configuration file:
datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005, and be aware of SAP note #2631285.
With SAP HANA 2.0, the hdbparam framework has been deprecated. As a result, the parameters must be set by
using SQL commands. For more information, see SAP note #2399079: Elimination of hdbparam in HANA 2.
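For example, with SAP HANA 2.0 the storage parameters listed earlier can be set with a single SQL statement
issued through hdbsql. This is a sketch: H31KEY is a hypothetical hdbuserstore key for a user with INIFILE ADMIN
privilege, and the section names follow SAP note #2399079.
hdbsql -U H31KEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','max_parallel_io_requests')='128', ('fileio','async_read_submit')='on', ('fileio','async_write_submit_active')='on', ('fileio','async_write_submit_blocks')='all' WITH RECONFIGURE"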
Refer to HLI supported scenarios to learn more about the storage layout for your architecture.
Next steps
Refer to HANA Installation on HLI
Install HANA on SAP HANA on Azure (Large
Instances)
To install HANA on SAP HANA on Azure (Large Instances), you must first do the following:
You provide Microsoft with all the data to deploy for you on an SAP HANA Large Instance.
You receive the SAP HANA Large Instance from Microsoft.
You create an Azure virtual network that is connected to your on-premises network.
You connect the ExpressRoute circuit for HANA Large Instances to the same Azure virtual network.
You install an Azure virtual machine that you use as a jump box for HANA Large Instances.
You ensure that you can connect from the jump box to your HANA Large Instance unit, and vice versa.
You check whether all the necessary packages and patches are installed.
You read the SAP notes and documentation about HANA installation on the operating system you're using.
Make sure that the HANA release of choice is supported on the operating system release.
The next section shows an example of downloading the HANA installation packages to the jump box virtual
machine. In this case, the operating system is Windows.
2. In this example, we downloaded SAP HANA 2.0 installation packages. On the Azure jump box virtual
machine, expand the self-extracting archives into the directory as shown below.
3. After the archives are extracted, copy the directory created by the extraction (in this case, 51052030) to the
HANA Large Instance unit, into a directory that you created under the /hana/shared volume (see the copy sketch
after the following note).
IMPORTANT
Don't copy the installation packages into the root or boot LUN, because space is limited and needs to be used by
other processes as well.
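A hedged sketch of the copy step: assuming the jump box has an OpenSSH client available and the HANA Large
Instance unit is reachable as hli-unit (a placeholder hostname), the extracted directory can be copied into a
directory you created under /hana/shared:
scp -r 51052030 root@hli-unit:/hana/shared/install/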
If you want to install SAP HANA by using the graphical user interface setup, the gtk2 package needs to be installed
on HANA Large Instances. To check whether it is installed, run the following command:
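One way to check, on SLES or RHEL, is to query the RPM database; this sketch returns the package version if gtk2
is installed (it isn't necessarily the exact command the original used):
rpm -q gtk2   # prints the installed gtk2 package, or "package gtk2 is not installed"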
(In later steps, we show the SAP HANA setup with the graphical user interface.)
Go into the installation directory, and navigate into the sub directory HDB_LCM_LINUX_X86_64.
Out of that directory, start:
./hdblcmgui
At this point, you progress through a sequence of screens in which you provide the data for the installation. In this
example, we are installing the SAP HANA database server and the SAP HANA client components. Therefore, our
selection is SAP HANA Database.
Next, select among several additional components that you can install.
Here, we choose the SAP HANA Client and the SAP HANA Studio. We also install a scale-up instance. Then choose
Single-Host System.
For the installation path, use the /hana/shared directory. In the next step, you provide the locations for the HANA
data files and the HANA log files.
NOTE
The SID you specified when you defined system properties (two screens ago) should match the SID of the mount points. If
there is a mismatch, go back and adjust the SID to the value you have on the mount points.
In the next step, review the host name and correct it if necessary.
In the next step, you also need to retrieve data you gave to Microsoft when you ordered the HANA Large Instance
deployment.
IMPORTANT
Provide the same System Administrator User ID and ID of User Group that you provided to Microsoft when you ordered
the unit deployment. Otherwise, the installation of SAP HANA on the HANA Large Instance unit fails.
The next two screens are not shown here. They enable you to provide the password for the SYSTEM user of the
SAP HANA database, and the password for the sapadm user. The latter is used for the SAP Host Agent that gets
installed as part of the SAP HANA database instance.
After defining the password, you see a confirmation screen. Check all the data listed, and continue with the
installation. You reach a progress screen that documents the installation progress, like this one:
As the installation finishes, you should see a screen like this one:
The SAP HANA instance should now be up and running, and ready for usage. You should be able to connect to it
from SAP HANA Studio. Also make sure that you check for and apply the latest updates.
Next steps
SAP HANA Large Instances high availability and disaster recovery on Azure
SAP HANA Large Instances high availability and
disaster recovery on Azure
IMPORTANT
This documentation is no replacement of the SAP HANA administration documentation or SAP Notes. It's expected that
the reader has a solid understanding of and expertise in SAP HANA administration and operations, especially with the
topics of backup, restore, high availability, and disaster recovery.
It's important that you exercise steps and processes taken in your environment and with your HANA versions
and releases. Some processes described in this documentation are simplified for a better general understanding
and are not meant to be used as detailed steps for eventual operation handbooks. If you want to create
operation handbooks for your configurations, you need to test and exercise your processes and document the
processes related to your specific configurations.
High availability and disaster recovery (DR) are crucial aspects of running your mission-critical SAP HANA on
the Azure (Large Instances) server. It's important to work with SAP, your system integrator, or Microsoft to
properly architect and implement the right high-availability and disaster recovery strategies. It's also important
to consider the recovery point objective (RPO) and the recovery time objective, which are specific to your
environment.
Microsoft supports some SAP HANA high-availability capabilities with HANA Large Instances. These capabilities
include:
Storage replication: The storage system's ability to replicate all data to another HANA Large Instance
stamp in another Azure region. SAP HANA operates independently of this method. This functionality is the
default disaster recovery mechanism offered for HANA Large Instances.
HANA system replication: The replication of all data in SAP HANA to a separate SAP HANA system. The
recovery time objective is minimized through data replication at regular intervals. SAP HANA supports
asynchronous, synchronous in-memory, and synchronous modes. Synchronous mode is used only for SAP
HANA systems that are within the same datacenter or less than 100 km apart. With the current design of
HANA Large Instance stamps, HANA system replication can be used for high availability within one region
only. HANA system replication requires a third-party reverse proxy or routing component for disaster
recovery configurations into another Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA that's an alternative to HANA system
replication. If the master node becomes unavailable, you configure one or more standby SAP HANA nodes in
scale-out mode, and SAP HANA automatically fails over to a standby node.
SAP HANA on Azure (Large Instances) is offered in two Azure regions in four geopolitical areas (US, Australia,
Europe, and Japan). Two regions within a geopolitical area that host HANA Large Instance stamps are connected
to separate dedicated network circuits. These are used for replicating storage snapshots to provide disaster
recovery methods. The replication is not established by default but is set up for customers who order disaster
recovery functionality. Storage replication is dependent on the usage of storage snapshots for HANA Large
Instances. It's not possible to choose an Azure region as a DR region that is in a different geopolitical area.
The following table shows the currently supported high availability and disaster recovery methods and
combinations:
Scenario supported in HANA Large Instances: Host auto-failover: scale-out (with or without standby), including 1+1
High availability option: Possible, with the standby taking the active role. HANA controls the role switch.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication.
Comments: HANA volume sets are attached to all the nodes. The DR site must have the same number of nodes.

Scenario supported in HANA Large Instances: HANA system replication
High availability option: Possible, with primary or secondary setup. The secondary moves to the primary role in a failover case. HANA system replication and OS control the failover.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. DR by using HANA system replication is not yet possible without third-party components.
Comments: A separate set of disk volumes is attached to each node. Only the disk volumes of the secondary replica in the production site get replicated to the DR location. One set of volumes is required at the DR site.
A dedicated DR setup is one where the HANA Large Instance unit in the DR site is not used for running any other
workload or non-production system. The unit is passive and is deployed only if a disaster failover is executed.
However, this setup isn't the preferred choice for many customers.
Refer to HLI supported scenarios to learn about the storage layout and Ethernet details for your architecture.
NOTE
SAP HANA MCOD deployments (multiple HANA Instances on one unit) as overlaying scenarios work with the HA and DR
methods listed in the table. An exception is the use of HANA System Replication with an automatic failover cluster based
on Pacemaker. Such a case only supports one HANA instance per unit. For SAP HANA MDC deployments, only non-
storage-based HA and DR methods work if more than one tenant is deployed. With one tenant deployed, all methods
listed are valid.
A multipurpose DR setup is where the HANA Large Instance unit on the DR site runs a non-production
workload. In case of disaster, shut down the non-production system, mount the storage-replicated (additional)
volume sets, and then start the production HANA instance. Most customers who use the HANA Large Instance
disaster recovery functionality use this configuration.
You can find more information on SAP HANA high availability in the following SAP articles:
SAP HANA High Availability Whitepaper
SAP HANA Administration Guide
SAP HANA Academy Video on SAP HANA System Replication
SAP Support Note #1999880 – FAQ on SAP HANA System Replication
SAP Support Note #2165547 – SAP HANA Back up and Restore within SAP HANA System Replication
Environment
SAP Support Note #1984882 – Using SAP HANA System Replication for Hardware Exchange with
Minimum/Zero Downtime
IMPORTANT
This article isn't a replacement for the SAP HANA administration documentation or SAP Notes. We expect that you have a
solid understanding of and expertise in SAP HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery. In this article, screenshots from SAP HANA Studio are shown. Content, structure, and the
nature of the screens of SAP administration tools and the tools themselves might change from SAP HANA release to release.
It's important that you exercise steps and processes taken in your environment and with your HANA versions and
releases. Some processes described in this article are simplified for a better general understanding. They aren't
meant to be used as detailed steps for eventual operation handbooks. If you want to create operation handbooks
for your configurations, test and exercise your processes and document the processes related to your specific
configurations.
One of the most important aspects of operating databases is to protect them from catastrophic events. The cause
of these events can be anything from natural disasters to simple user errors.
Backing up a database, with the ability to restore it to any point in time, such as before someone deleted critical
data, enables restoration to a state that's as close as possible to the way it was prior to the disruption.
Two types of backups must be performed to achieve the capability to restore:
Database backups: Full, incremental, or differential backups
Transaction log backups
In addition to full-database backups performed at an application level, you can perform backups with storage
snapshots. Storage snapshots don't replace transaction log backups. Transaction log backups remain important to
restore the database to a certain point in time or to empty the logs from already committed transactions. Storage
snapshots can accelerate recovery by quickly providing a roll-forward image of the database.
SAP HANA on Azure (Large Instances) offers two backup and restore options:
Do it yourself (DIY). After you make sure that there's enough disk space, perform full database and log
backups by using one of the following disk backup methods. You can back up either directly to volumes
attached to the HANA Large Instance units or to NFS shares that are set up in an Azure virtual machine
(VM). In the latter case, customers set up a Linux VM in Azure, attach Azure Storage to the VM, and share
the storage through a configured NFS server in that VM. If you perform the backup against volumes that
directly attach to HANA Large Instance units, copy the backups to an Azure storage account. Do this after
you set up an Azure VM that exports NFS shares that are based on Azure Storage. You can also use either
an Azure Backup vault or Azure cold storage.
Another option is to use a third-party data protection tool to store the backups after they're copied to an
Azure storage account. The DIY backup option also might be necessary for data that you need to store for
longer periods of time for compliance and auditing purposes. In all cases, the backups are copied into NFS
shares represented through a VM and Azure Storage (see the mount sketch after this list).
Infrastructure backup and restore functionality. You also can use the backup and restore functionality
that the underlying infrastructure of SAP HANA on Azure (Large Instances) provides. This option fulfills the
need for backups and fast restores. The rest of this section addresses the backup and restore functionality
that's offered with HANA Large Instances. This section also covers the relationship that backup and restore
have to the disaster recovery functionality offered by HANA Large Instances.
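For the DIY option, this is a minimal sketch of mounting an NFS share exported by an Azure VM as a backup
target. The hostname backupvm and the paths are placeholders, not values from this guide.
mkdir -p /backup/hana
mount -t nfs -o rw,hard,vers=4 backupvm:/export/hanabackup /backup/hana
# database and log backups written to /backup/hana land on the Azure Storage attached to the VM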
NOTE
The snapshot technology that's used by the underlying infrastructure of HANA Large Instances has a dependency on SAP
HANA snapshots. At this point, SAP HANA snapshots don't work in conjunction with multiple tenants of SAP HANA
multitenant database containers. If only one tenant is deployed, SAP HANA snapshots do work and you can use this
method.
IMPORTANT
Run these configuration commands with the same user context that the snapshot commands are run in. Otherwise, the
snapshot commands won't work properly.
Step 6: Get the snapshot scripts, configure the snapshots, and test the configuration and connectivity
Download the most recent version of the scripts from GitHub. The way the scripts are installed changed with
release 4.1 of the scripts. For more information, see "Enable communication with SAP HANA" in Microsoft
snapshot tools for SAP HANA on Azure.
For the exact sequence of commands, see "Easy installation of snapshot tools (default)" in Microsoft snapshot tools
for SAP HANA on Azure. We recommend the use of the default installation.
To upgrade from version 3.x to 4.1, see "Upgrade an existing install" in Microsoft snapshot tools for SAP HANA on
Azure. To uninstall the 4.1 tool set, see "Uninstallation of the snapshot tools" in Microsoft snapshot tools for SAP
HANA on Azure.
Don't forget to run the steps described in "Complete setup of snapshot tools" in Microsoft snapshot tools for SAP
HANA on Azure.
The purpose of the different scripts and files that get installed is described in "What are these snapshot tools?"
in Microsoft snapshot tools for SAP HANA on Azure.
Before you configure the snapshot tools, make sure that you also configured HANA backup locations and settings
correctly. For more information, see "SAP HANA Configuration" in Microsoft snapshot tools for SAP HANA on
Azure.
The configuration of the snapshot tool set is described in "Config file - HANABackupCustomerDetails.txt" in
Microsoft snapshot tools for SAP HANA on Azure.
Test connectivity with SAP HANA
After you put all the configuration data into the HANABackupCustomerDetails.txt file, check whether the
configurations are correct for the HANA instance data. Use the script testHANAConnection , which is independent of
an SAP HANA scale-up or scale-out configuration.
For more information, see "Check connectivity with SAP HANA - testHANAConnection" in Microsoft snapshot tools
for SAP HANA on Azure.
Test storage connectivity
The next test step is to check the connectivity to the storage based on the data you put into the
HANABackupCustomerDetails.txt configuration file. Then run a test snapshot. Before you run the
azure_hana_backup command, you must run this test. For the sequence of commands for this test, see "Check
connectivity with storage - testStorageSnapshotConnection" in Microsoft snapshot tools for SAP HANA on Azure.
After a successful sign-in to the storage virtual machine interfaces, the script continues with phase 2 and creates a
test snapshot. The output is shown here for a three-node scale-out configuration of SAP HANA.
If the test snapshot runs successfully with the script, you can schedule the actual storage snapshots. If it isn't
successful, investigate the problems before you move forward. The test snapshot should stay around until the first
real snapshots are done.
Step 7: Perform snapshots
When the preparation steps are finished, you can start to configure and schedule the actual storage snapshots. The
script to be scheduled works with SAP HANA scale-up and scale-out configurations. For periodic and regular
execution of the backup script, schedule the script by using the cron utility.
For the exact command syntax and functionality, see "Perform snapshot backup - azure_hana_backup" in Microsoft
snapshot tools for SAP HANA on Azure.
When the script azure_hana_backup runs, it creates the storage snapshot in the following three phases:
1. It runs an SAP HANA snapshot.
2. It runs a storage snapshot.
3. It removes the SAP HANA snapshot that was created before the storage snapshot ran.
To run the script, call it from the HDB executable folder to which it was copied.
The retention period is administered with the number of snapshots that are submitted as a parameter when you
run the script. The amount of time that's covered by the storage snapshots is a function of the period of execution,
and of the number of snapshots submitted as a parameter when the script runs.
If the number of snapshots that are kept exceeds the number that are named as a parameter in the call of the
script, the oldest storage snapshot of the same label is deleted before a new snapshot runs. The number you give
as the last parameter of the call is the number you can use to control the number of snapshots that are kept. With
this number, you also can control, indirectly, the disk space that's used for snapshots.
Snapshot strategies
The frequency of snapshots for the different types depends on whether you use the HANA Large Instance disaster
recovery functionality. This functionality relies on storage snapshots, which might require special
recommendations for the frequency and execution periods of the storage snapshots.
In the considerations and recommendations that follow, the assumption is that you do not use the disaster
recovery functionality that HANA Large Instances offers. Instead, you use the storage snapshots to have backups
and be able to provide point-in-time recovery for the last 30 days. Given the limitations of the number of
snapshots and space, consider the following requirements:
The recovery time for point-in-time recovery.
The space used.
The recovery point and recovery time objectives for potential recovery from a disaster.
The eventual execution of HANA full-database backups against disks. Whenever a full-database backup against
disks or the backint interface is performed, the execution of the storage snapshots fails. If you plan to run full-
database backups on top of storage snapshots, make sure that the execution of the storage snapshots is
disabled during this time.
The number of snapshots per volume, which is limited to 250.
If you don't use the disaster recovery functionality of HANA Large Instances, you can take snapshots less
frequently. In such cases, perform the combined snapshots on /hana/data and /hana/shared (which includes
/usr/sap) in 12-hour or 24-hour periods, and keep the snapshots for a month. The same is true for the snapshots
of the log backup volume. The execution of SAP HANA transaction log backups against the log backup volume occurs
in 5-minute to 15-minute periods.
Scheduled storage snapshots are best performed by using cron. Use the same script for all backups and disaster
recovery needs. Modify the script inputs to match the various requested backup times. These snapshots are all
scheduled differently in cron depending on their execution time. It can be hourly, every 12 hours, daily, or weekly.
The following example shows a cron schedule in /etc/crontab:
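As an illustration of such a schedule only: the user shadm and the wrapper script names below are placeholders;
substitute the azure_hana_backup invocations documented in "Perform snapshot backup - azure_hana_backup".
# minute hour day month weekday user command
10  *  *  *  *   shadm   /hana/shared/hourly_hana_snapshot.sh   # combined snapshot over /hana/data and /hana/shared, kept two days
30  1  *  *  *   shadm   /hana/shared/daily_snapshots.sh        # daily snapshots, kept four weeks
*/5 *  *  *  *   shadm   /hana/shared/hana_log_backup.sh        # HANA transaction log backup, every 5 minutes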
In the previous example, an hourly combined snapshot covers the volumes that contain the /hana/data and
/hana/shared/SID, which includes /usr/sap, locations. Use this type of snapshot for a faster point-in-time recovery
within the past two days. There's also a daily snapshot on those volumes. So, you have two days of coverage by
hourly snapshots plus four weeks of coverage by daily snapshots. The transaction log backup volume also is
backed up daily. These backups are kept for four weeks.
As you see in the third line of crontab, the backup of the HANA transaction log is scheduled to run every 5
minutes. The start times of the different cron jobs that run storage snapshots are staggered. In this way, the
snapshots don't run all at once at a certain point in time.
In the following example, you perform a combined snapshot that covers the volumes that contain the /hana/data
and /hana/shared/SID, which includes /usr/sap, locations on an hourly basis. You keep these snapshots for two
days. The snapshots of the transaction log backup volumes run on a 5-minute basis and are kept for four hours. As
before, the backup of the HANA transaction log file is scheduled to run every 5 minutes.
The snapshot of the transaction log backup volume is performed with a 2-minute delay after the transaction log
backup has started. Under normal circumstances, the SAP HANA transaction log backup finishes within those 2
minutes. As before, the volume that contains the boot LUN is backed up once per day by a storage snapshot and is
kept for four weeks.
The following graphic illustrates the sequences of the previous example. The boot LUN is excluded.
SAP HANA performs regular writes against the /hana/log volume to document the committed changes to the
database. On a regular basis, SAP HANA writes a savepoint to the /hana/data volume. As specified in crontab, an
SAP HANA transaction log backup runs every 5 minutes.
You also see that an SAP HANA snapshot runs every hour as a result of triggering a combined storage snapshot
over the /hana/data and /hana/shared/SID volumes. After the HANA snapshot succeeds, the combined storage
snapshot runs. As instructed in crontab, the storage snapshot on the /hana/logbackup volume runs every 5
minutes, around 2 minutes after the HANA transaction log backup.
IMPORTANT
The use of storage snapshots for SAP HANA backups is valuable only when the snapshots are performed in conjunction with
SAP HANA transaction log backups. These transaction log backups need to cover the time periods between the storage
snapshots.
If you've set a commitment to users of a point-in-time recovery of 30 days, you need to:
Access a combined storage snapshot over /hana/data and /hana/shared/SID that's 30 days old, in extreme
cases.
Have contiguous transaction log backups that cover the time between any of the combined storage snapshots.
So, the oldest snapshot of the transaction log backup volume needs to be 30 days old. This isn't the case if you
copy the transaction log backups to another NFS share that's located on Azure Storage. In that case, you might
pull old transaction log backups from that NFS share.
To benefit from storage snapshots and the eventual storage replication of transaction log backups, change the
location to which SAP HANA writes the transaction log backups. You can make this change in HANA Studio.
Although SAP HANA backs up full log segments automatically, specify a log backup interval to be deterministic.
This is especially true when you use the disaster recovery option because you usually want to run log backups
with a deterministic period. In the following case, 15 minutes is set as the log backup interval.
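Instead of HANA Studio, the same change can be made through hdbsql. This is a sketch, assuming a hypothetical
user store key H31KEY and the example SID H31; log_backup_timeout_s = 900 corresponds to the 15-minute interval.
hdbsql -U H31KEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_logbackup')='/hana/logbackups/H31', ('persistence','log_backup_timeout_s')='900' WITH RECONFIGURE"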
You also can choose backups that are more frequent than every 15 minutes. A more frequent setting is often used
in conjunction with disaster recovery functionality of HANA Large Instances. Some customers perform transaction
log backups every 5 minutes.
If the database has never been backed up, the final step is to perform a file-based database backup to create a
single backup entry that must exist within the backup catalog. Otherwise, SAP HANA can't initiate your specified
log backups.
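A sketch of such an initial file-based backup via hdbsql (H31KEY again is a hypothetical user store key, and the
backup prefix is arbitrary):
hdbsql -U H31KEY "BACKUP DATA USING FILE ('INITIAL_FULL')"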
After your first successful storage snapshots run, delete the test snapshot that ran in step 6. For more information,
see "Remove test snapshots - removeTestStorageSnapshot" in Microsoft snapshot tools for SAP HANA on Azure.
Monitor the number and size of snapshots on the disk volume
On a specific storage volume, you can monitor the number of snapshots and the storage consumption of those
snapshots. The ls command doesn't show the snapshot directory or files. The Linux OS command du shows
details about those storage snapshots because they're stored on the same volumes. Use the command with the
following options:
du -sh .snapshot: This option provides a total of all the snapshots within the snapshot directory.
du -sh --max-depth=1: This option lists all the snapshots that are saved in the .snapshot folder and the size of
each snapshot.
du -hc: This option provides the total size used by all the snapshots.
Use these commands to make sure that the snapshots that are taken and stored don't consume all the storage on
the volumes.
NOTE
The snapshots of the boot LUN aren't visible with the previous commands.
In the previous example, the snapshot label is dailyhana, and the number of snapshots with this label to be kept
is 28. As you respond to disk space consumption, you might want to reduce the number of stored snapshots. An
easy way to reduce the number of snapshots to 15, for example, is to run the script with the last parameter set
to 15. If you run the script with this setting, the number of snapshots, which includes the new storage snapshot,
is 15. The 15 most recent snapshots are kept, and the older snapshots are deleted.
NOTE
This script reduces the number of snapshots only if there are snapshots more than one hour old. The script doesn't delete
snapshots that are less than one hour old. These restrictions are related to the optional disaster recovery functionality
offered.
If you no longer want to maintain a set of snapshots with the backup prefix dailyhana in the syntax examples, run
the script with 0 as the retention number. All snapshots that match that label are then removed. Removing all
snapshots can affect the capabilities of HANA Large Instances disaster recovery functionality.
A second option to delete specific snapshots is to use the script azure_hana_snapshot_delete . This script is
designed to delete a snapshot or set of snapshots either by using the HANA backup ID as found in HANA Studio or
through the snapshot name itself. Currently, the backup ID is only tied to the snapshots created for the hana
snapshot type. Snapshot backups of the type logs and boot don't perform an SAP HANA snapshot, so there's no
backup ID to be found for those snapshots. If the snapshot name is entered, it looks for all snapshots on the
different volumes that match the entered snapshot name.
For more information on the script, see "Delete a snapshot - azure_hana_snapshot_delete" in Microsoft snapshot
tools for SAP HANA on Azure.
Run the script as user root .
IMPORTANT
If there's data that exists only on the snapshot you plan to delete, after the snapshot is deleted, that data is lost forever.
NOTE
Single file restore doesn't work for snapshots of the boot LUN independent of the type of the HANA Large Instance units.
The .snapshot directory isn't exposed in the boot LUN.
3. Unmount the data volumes on each HANA database node. If the data volumes are still mounted to the
operating system, the restoration of the snapshot fails.
4. Open an Azure support request, and include instructions about the restoration of a specific snapshot:
During the restoration: SAP HANA on Azure Service might ask you to attend a conference call to
coordinate, verify, and confirm that the correct storage snapshot is restored.
After the restoration: SAP HANA on Azure Service notifies you when the storage snapshot is
restored.
5. After the restoration process is complete, remount all the data volumes.
Another way to recover, for example, SAP HANA data files from a storage snapshot is documented in step 7 of the
Manual recovery guide for SAP HANA on Azure from a storage snapshot.
To restore from a snapshot backup, see Manual recovery guide for SAP HANA on Azure from a storage snapshot.
NOTE
If your snapshot was restored by Microsoft operations, you don't need to do step 7.
6. On the Basics tab, provide the following information for the ticket:
Issue type: Technical
Subscription: Your subscription
Service: SAP HANA Large Instance
Resource: Your resource group
Summary: Provide the user-generated public key
Problem type: Configuration and Setup
Problem subtype: Set up SnapCenter for HLI
7. In the Description of the support ticket, on the Details tab, provide:
Set up SnapCenter for HLI
Your public key for SnapCenter user (snapcenter.pem) - see the public key create example below
openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout snapcenter.key -out snapcenter.pem -subj
"/C=US/ST=WA/L=BEL/O=NetApp/CN=snapcenter"
Generating a 2048 bit RSA private key
.......................................................................................................
.........................................+++++
...............................+++++
writing new private key to 'snapcenter.key'
-----
sollabsjct31:~ # ls -l cl25*
-rw-r--r-- 1 root root 1704 Jul 22 09:59 snapcenter.key
-rw-r--r-- 1 root root 1253 Jul 22 09:59 snapcenter.pem
10. Attach the snapcenter.pem file to the support ticket, and then select Create.
Once the public key certificate is submitted, Microsoft sets up the SnapCenter username for your tenant,
along with the SVM IP address.
11. After you receive the SVM IP address, set a password of your choosing to access the SVM. The password is set
through a REST call (see the documentation) issued from a HANA Large Instance unit, or from a VM in a virtual
network that has access to the HANA Large Instance environment.
IMPORTANT
Pay attention to the size of the VM, especially in larger environments.
3. Configure the user credentials for the SnapCenter. By default, it populates the Windows user credentials
used for installing the application.
4. When you start the session, save the security exemption and the GUI starts up.
5. Sign in to SnapCenter on the VM (https://snapcenter-vm:8146) using the Windows credentials to configure
the environment.
Set up the storage system
1. In SnapCenter, select Storage System, and then select +New.
The default is one SVM per tenant. If a customer has multiple tenants or HLIs in multiple regions, the
recommendation is to configure all SVMs in SnapCenter.
2. In Add Storage System, provide the information for the Storage System that you want to add, the
SnapCenter username and password, and then select Submit.
NOTE
The default is one SVM per tenant. If there are multiple tenants, then the recommendation is to configure all SVMs
here in SnapCenter.
3. In SnapCenter, select Hosts, and then select +Add to set up the HANA plug-in and the HANA DB hosts. The
latest version of SnapCenter detects the HANA database on the host automatically.
5. Review the host details and select Submit to install the plug-in on the SnapCenter server.
6. After the plug-in is installed, in SnapCenter, select Hosts and then select +Add to add a HANA node.
9. On the HANA node, under the system database, select Security > Users > SNAPCENTER to create the
SnapCenter user.
Auto discovery
SnapCenter 4.3 enables the auto discovery function by default. Auto discovery is not supported for HANA
instances with HANA System Replication (HSR) configured. You must manually add the instance to the SnapCenter
server.
HANA setup (Manual)
If you configured HSR, you must configure the system manually.
1. In SnapCenter, select Resources and SAP HANA (at the top), and then select +Add SAP HANA
Database (on the right).
2. Specify the resource details of the HANA administrator user configured on the Linux host, or on the host
where the plug-ins are installed. The backup will be managed from the plug-in on the Linux system.
3. Select the data volume for which you need to take snapshots, select Save, and then select Finish.
2. Follow the workflow of the configuration wizard to configure the snapshot scheduler.
3. Provide the options for configuring pre/post commands and special SSL keys. In this example, we're using
no special settings.
4. Select Add to create a snapshot policy, which can also be used for other HANA databases.
7. Configure the On demand backup retention settings. In our example, we set the retention to keep
three snapshot copies.
9. If a SnapMirror setup is configured, select Update SnapMirror after creating a local Snapshot copy.
10. Select Finish to review the summary of the new backup policy.
11. Under Configure Schedule, select Add.
12. Select the Start date, the Expires on date, and the frequency.
Open-SmConnection
Disable-SmCollectionEms
su - h31adm
> sapcontrol -nr 00 -function StopSystem
StopSystem
OK
> sapcontrol -nr 00 -function GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GRAY, Stopped, , , 35902
umount /hana/data/H31/mnt00001
3. Restore the database files via SnapCenter. Select the database, and then select Restore.
4. Select the restore type. In our example, we're restoring the complete resource.
NOTE
With a default setup, you don't need to specify commands to do a local restore from the on-disk snapshot.
TIP
If you want to restore a particular LUN inside the volume, select File Level.
1. Create a HANA database user store for the H34 database from /usr/sap/H34/HDB40.
hdbuserstore set H34KEY sollabsjct34:34013 system manager
zypper in java-1_8_0-openjdk
4. In SnapCenter, add the destination host on which the clone will be mounted. For more information, see
Adding hosts and installing plug-in packages on remote hosts.
a. Provide the information for the Run As Credentials you want to add.
b. Select the host operating system and enter the host information.
c. Under Plug-ins to install , select the version, enter the install path, and select SAP HANA .
d. Select Validate to run the pre-install checks.
5. Stop HANA and unmount the old data volume. You will mount the clone from SnapCenter.
6. Create the configuration and shell script files for the target.
mkdir /NetApp
chmod 777 /NetApp
cd /NetApp
chmod 777 sc-system-refresh-H34.cfg
chmod 777 sc-system-refresh.sh
TIP
You can copy the scripts from SAP Cloning from SnapCenter.
vi sc-system-refresh-H34.cfg
HANA_ARCHITECTURE="MDC_single_tenant"
KEY="H34KEY"
TIME_OUT_START=18
TIME_OUT_STOP=18
INSTANCENO="40"
STORAGE="10.250.101.33"
8. Modify the shell script file.
vi sc-system-refresh.sh
VERBOSE=NO
MY_NAME="`basename $0`"
BASE_SCRIPT_DIR="`dirname $0`"
MOUNT_OPTIONS="rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock"
9. Start the clone from a backup process. Select the host to create the clone.
NOTE
For more information, see Cloning from a backup.
vi /etc/fstab
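The entry you add corresponds to the one the refresh script writes for the clone. The following line mirrors the
values from the cloning logfile shown later in this section; the storage IP address and junction path will differ
in your environment.
# fstab entry for the cloned data volume (values taken from the example logfile below)
10.250.101.31:/Sc21186309-ee57-41a3-8584-8210297f791d /hana/data/H34/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 0 0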
Delete a clone
You can delete a clone if it is no longer necessary. For more information, see Deleting clones.
The commands executed before clone deletion are:
Pre clone delete: /NetApp/sc-system-refresh.sh shutdown H34
Unmount: /NetApp/sc-system-refresh.sh umount H34
These commands allow SnapCenter to shut down the database, unmount the volume, and delete the fstab entry.
After that, the FlexClone is deleted.
Cloning database logfile
20190502025323###sollabsjct34###sc-system-refresh.sh: Adding entry in /etc/fstab.
20190502025323###sollabsjct34###sc-system-refresh.sh: 10.250.101.31:/Sc21186309-ee57-41a3-8584-8210297f791d
/hana/data/H34/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0
20190502025323###sollabsjct34###sc-system-refresh.sh: Mounting data volume.
20190502025323###sollabsjct34###sc-system-refresh.sh: mount /hana/data/H34/mnt00001
20190502025323###sollabsjct34###sc-system-refresh.sh: Data volume mounted successfully.
20190502025323###sollabsjct34###sc-system-refresh.sh: chown -R h34adm:sapsys /hana/data/H34/mnt00001
20190502025333###sollabsjct34###sc-system-refresh.sh: Recover system database.
20190502025333###sollabsjct34###sc-system-refresh.sh: /usr/sap/H34/HDB40/exe/Python/bin/python
/usr/sap/H34/HDB40/exe/python_support/recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"
[140278542735104, 0.005] >> starting recoverSys (at Thu May 2 02:53:33 2019)
[140278542735104, 0.005] args: ()
[140278542735104, 0.005] keys: {'command': 'RECOVER DATA USING SNAPSHOT CLEAR LOG'}
recoverSys started: ============2019-05-02 02:53:33 ============
testing master: sollabsjct34
sollabsjct34 is master
shutdown database, timeout is 120
stop system
stop system: sollabsjct34
stopping system: 2019-05-02 02:53:33
stopped system: 2019-05-02 02:53:33
creating file recoverInstance.sql
restart database
restart master nameserver: 2019-05-02 02:53:38
start system: sollabsjct34
2019-05-02T02:53:59-07:00 P010976 16a77f6c8a2 INFO RECOVERY state of service: nameserver,
sollabsjct34:34001, volume: 1, RecoveryPrepared
recoverSys finished successfully: 2019-05-02 02:54:00
[140278542735104, 26.490] 0
[140278542735104, 26.490] << ending recoverSys, rc = 0 (RC_TEST_OK), after 26.485 secs
20190502025400###sollabsjct34###sc-system-refresh.sh: Wait until SAP HANA database is started ....
20190502025400###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025410###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025420###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025430###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025440###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025451###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502025451###sollabsjct34###sc-system-refresh.sh: SAP HANA database is started.
20190502025451###sollabsjct34###sc-system-refresh.sh: Recover tenant database H34.
20190502025451###sollabsjct34###sc-system-refresh.sh: /usr/sap/H34/SYS/exe/hdb/hdbsql -U H34KEY RECOVER DATA
FOR H34 USING SNAPSHOT CLEAR LOG
0 rows affected (overall time 69.584135 sec; server time 69.582835 sec)
20190502025600###sollabsjct34###sc-system-refresh.sh: Checking availability of Indexserver for tenant H34.
20190502025601###sollabsjct34###sc-system-refresh.sh: Recovery of tenant database H34 succesfully finished.
20190502025601###sollabsjct34###sc-system-refresh.sh: Status: GREEN
Deleting the DB Clone – Logfile
20190502030312###sollabsjct34###sc-system-refresh.sh: Stopping HANA database.
20190502030312###sollabsjct34###sc-system-refresh.sh: sapcontrol -nr 40 -function StopSystem HDB
02.05.2019 03:03:12
StopSystem
OK
20190502030312###sollabsjct34###sc-system-refresh.sh: Wait until SAP HANA database is stopped ....
20190502030312###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030322###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030332###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030342###sollabsjct34###sc-system-refresh.sh: Status: GRAY
20190502030342###sollabsjct34###sc-system-refresh.sh: SAP HANA database is stopped.
20190502030347###sollabsjct34###sc-system-refresh.sh: Unmounting data volume.
20190502030347###sollabsjct34###sc-system-refresh.sh: Junction path: Sc21186309-ee57-41a3-8584-8210297f791d
20190502030347###sollabsjct34###sc-system-refresh.sh: umount /hana/data/H34/mnt00001
20190502030347###sollabsjct34###sc-system-refresh.sh: Deleting /etc/fstab entry.
20190502030347###sollabsjct34###sc-system-refresh.sh: Data volume unmounted successfully.
NOTE
You may need to uninstall an older version of the plug-in manually.
cd /opt/NetApp/snapcenter/spl/installation/plugins
./uninstall
You can now install the latest HANA plug-in on the new node by selecting SUBMIT in SnapCenter.
Next steps
See Disaster recovery principles and preparation.
Disaster Recovery principles
HANA Large Instances offer a disaster recovery functionality between HANA Large Instance stamps in different
Azure regions. For instance, if you deploy HANA Large Instance units in the US West region of Azure, you can use
the HANA Large Instance units in the US East region as disaster recovery units. As mentioned earlier, disaster
recovery is not configured automatically, because it requires you to pay for another HANA Large Instance unit in
the DR region. The disaster recovery setup works for scale-up as well as scale-out setups.
In the scenarios deployed so far, customers use the unit in the DR region to run non-production systems that use
an installed HANA instance. The HANA Large Instance unit needs to be of the same SKU as the SKU used for
production purposes. The following image shows what the disk configuration between the server unit in the Azure
production region and the disaster recovery region looks like:
As shown in this overview graphic, you need to order a second set of disk volumes. The target disk volumes in
the disaster recovery units are the same size as the production volumes of the production instance. These disk
volumes are associated with the HANA Large Instance server unit in the disaster recovery site. The following
volumes are replicated from the production region to the DR site:
/hana/data
/hana/logbackups
/hana/shared (includes /usr/sap)
The /hana/log volume is not replicated, because the SAP HANA transaction log isn't needed for the way the
restore from those volumes is performed.
The basis of the disaster recovery functionality offered is the storage replication functionality offered by the HANA
Large Instance infrastructure. The functionality that is used on the storage side is not a constant stream of changes
that replicate in an asynchronous manner as changes happen to the storage volume. Instead, it is a mechanism that
relies on the fact that snapshots of these volumes are created on a regular basis. The delta between an already
replicated snapshot and a new snapshot that is not yet replicated is then transferred to the disaster recovery site
into target disk volumes. These snapshots are stored on the volumes and, if there is a disaster recovery failover,
need to be restored on those volumes.
The first transfer consists of the complete data of the volume; after that, only the deltas between snapshots
are transferred, so the amount of data moved becomes much smaller. As a result, the volumes in the DR site
contain every one of the volume snapshots performed in the production site. Eventually, you can use that DR
system to get to an earlier status to recover lost data, without rolling back the production system.
If there is an MCOD deployment with multiple independent SAP HANA instances on one HANA Large Instance unit,
it's expected that all SAP HANA instances are storage-replicated to the DR side.
In cases where you use HANA system replication as high-availability functionality in your production site, and
use storage-based replication for the DR site, the volumes of both nodes in the primary site are replicated to
the DR instance. You must purchase additional storage (the same size as that of the primary node) at the DR site
to accommodate replication from both the primary and the secondary.
NOTE
The HANA Large Instance storage replication functionality is mirroring and replicating storage snapshots. If you don't
perform storage snapshots as introduced in the Backup and restore section of this article, there can't be any replication to
the disaster recovery site. Storage snapshot execution is a prerequisite to storage replication to the disaster recovery site.
If the server instance has not already been ordered with the additional storage volume set, SAP HANA on Azure
Service Management attaches the additional set of volumes as a target for the production replica to the HANA
Large Instance unit on which you're running the TST HANA instance. For that purpose, you need to provide the SID
of your production HANA instance. After SAP HANA on Azure Service Management confirms the attachment of
those volumes, you need to mount those volumes to the HANA Large Instance unit.
The next step is for you to install the second SAP HANA instance on the HANA Large Instance unit in the DR Azure
region, where you run the TST HANA instance. The newly installed SAP HANA instance needs to have the same SID.
The users created need to have the same UID and Group ID that the production instance has. Read Backup and
restore for details. If the installation succeeded, you need to:
Execute step 2 of the storage snapshot preparation described in Backup and restore.
Create a public key for the DR unit of HANA Large Instance unit if you have not yet done so. See step 3 of the
storage snapshot preparation described in Backup and restore.
Maintain the HANABackupCustomerDetails.txt with the new HANA instance and test whether connectivity into
storage works correctly.
Stop the newly installed SAP HANA instance on the HANA Large Instance unit in the DR Azure region.
Unmount these PRD volumes and contact SAP HANA on Azure Service Management. The volumes can't stay
mounted to the unit, because they can't be accessed while they function as storage replication targets.
The operations team establishes the replication relationship between the PRD volumes in the production Azure
region and the PRD volumes in the DR Azure region.
IMPORTANT
The /hana/log volume is not replicated because it is not necessary to restore the replicated SAP HANA database to a
consistent state in the disaster recovery site.
Next, set up, or adjust the storage snapshot backup schedule to get to your RTO and RPO in the disaster case. To
minimize the recovery point objective, set the following replication intervals in the HANA Large Instance service:
For the volumes covered by the combined snapshot (snapshot type hana ), set to replicate every 15 minutes to
the equivalent storage volume targets in the disaster recovery site.
For the transaction log backup volume (snapshot type logs ), set to replicate every 3 minutes to the equivalent
storage volume targets in the disaster recovery site.
To minimize the recovery point objective, set up the following:
Perform a hana type storage snapshot (see "Step 7: Perform snapshots") every 30 minutes to 1 hour.
Perform SAP HANA transaction log backups every 5 minutes.
Perform a logs type storage snapshot every 5-15 minutes. With this interval period, you achieve an RPO of
around 15-25 minutes.
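For illustration, such a schedule could be driven by cron on the HANA Large Instance unit. A minimal sketch, where both script paths are placeholders for whatever wraps your snapshot tooling (not actual tool names):

# illustrative crontab entries (edit with crontab -e); the script names are placeholders
0,30 * * * * /hana/shared/scripts/take_hana_type_snapshot.sh   # 'hana' type snapshot every 30 minutes
*/5 * * * * /hana/shared/scripts/take_logs_type_snapshot.sh    # 'logs' type snapshot every 5 minutes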
With this setup, the sequence of transaction log backups, storage snapshots, and the replication of the HANA transaction log backup volume, /hana/data, and /hana/shared (which includes /usr/sap) might look like the data shown in this graphic:
To achieve an even better RPO in the disaster recovery case, you can copy the HANA transaction log backups from
SAP HANA on Azure (Large Instances) to the other Azure region. To achieve this further RPO reduction, perform the
following steps:
1. Back up the HANA transaction log as frequently as possible to /hana/logbackups.
2. Use rsync to copy the transaction log backups to NFS shares hosted on Azure virtual machines, as shown in the sketch after this list. The VMs are in Azure virtual networks in the Azure production region and in the DR region. You need to connect both Azure virtual networks to the circuit connecting the production HANA Large Instances to Azure. See the graphics in the Network considerations for disaster recovery with HANA Large Instances section.
3. Keep the transaction log backups in the region in the VM attached to the NFS exported storage.
4. In a disaster failover case, supplement the transaction log backups you find on the /hana/logbackups volume
with more recently taken transaction log backups on the NFS share in the disaster recovery site.
5. Recover to the latest transaction log backup that might have been saved over to the DR region.
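A minimal rsync sketch for step 2; the user, host name, and paths are placeholders, and the same copy runs against the NFS-hosting VM in the DR region:

# copy new transaction log backups to the VM exporting the NFS share
rsync -av --ignore-existing /hana/logbackups/ backupuser@nfsvm:/srv/nfs/hanalogbackups/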
When HANA Large Instance operations confirm the replication relationship setup and you start executing storage snapshot backups, the data replication begins.
As the replication progresses, the snapshots on the PRD volumes in the DR Azure region are not restored. They
are only stored. If the volumes are mounted in such a state, they represent the state in which you unmounted those
volumes after the PRD SAP HANA instance was installed in the server unit in the DR Azure region. They also
represent the storage backups that are not yet restored.
If there is a failover, you also can choose to restore to an older storage snapshot instead of the latest storage
snapshot.
Next steps
Refer to the Disaster recovery failover procedure.
Disaster recovery failover procedure
12/22/2020 • 6 minutes to read • Edit Online
IMPORTANT
This article isn't a replacement for the SAP HANA administration documentation or SAP Notes. We expect that you have a
solid understanding of and expertise in SAP HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery (DR). In this article, screenshots from SAP HANA Studio are shown. Content, structure, and
the nature of the screens of SAP administration tools and the tools themselves might change from SAP HANA release to
release.
There are two cases to consider when you fail over to a DR site:
You need the SAP HANA database to go back to the latest status of data. In this case, there's a self-service script
with which you can perform the failover without the need to contact Microsoft. For the failback, you need to
work with Microsoft.
You want to restore to a storage snapshot that's not the latest replicated snapshot. In this case, you need to work
with Microsoft.
NOTE
The following steps must be done on the HANA Large Instance unit, which represents the DR unit.
To restore to the latest replicated storage snapshots, follow the steps in "Perform full DR failover -
azure_hana_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure.
If you want to fail over multiple SAP HANA instances, run the azure_hana_dr_failover command several times. When requested, enter the SAP HANA SID you want to fail over and restore.
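For illustration, a failover run could look like the following; the snapshot tools are Perl scripts, and the exact script name and location depend on the version you deployed:

# run from the snapshot tools directory on the DR unit
./azure_hana_dr_failover.pl
# when prompted, enter the SID of the HANA instance to fail over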
You can test the DR failover also without impacting the actual replication relationship. To perform a test failover,
follow the steps in "Perform a test DR failover - azure_hana_test_dr_failover" in Microsoft snapshot tools for SAP
HANA on Azure.
IMPORTANT
Do not run any production transactions on the instance that you created in the DR site through the process of testing a failover. The command azure_hana_test_dr_failover creates a set of volumes that have no relationship to the primary site. As a result, synchronization back to the primary site is not possible.
If you want to test multiple SAP HANA instances, run the script several times. When requested, enter the SAP HANA SID of the instance you want to test for failover.
NOTE
If you need to fail over to the DR site to rescue some data that was deleted hours ago and need the DR volumes to be set to
an earlier snapshot, this procedure applies.
1. Shut down the nonproduction instance of HANA on the disaster recovery unit of HANA Large Instances that
you're running. A dormant HANA production instance is preinstalled.
2. Make sure that no SAP HANA processes are running. Use the following command for this check:
/usr/sap/hostctrl/exe/sapcontrol -nr <HANA instance number> -function GetProcessList
The output should show you the hdbdaemon process in a stopped state and no other HANA processes in a running or started state.
3. Determine to which snapshot name or SAP HANA backup ID you want to have the disaster recovery site
restored. In real disaster recovery cases, this snapshot is usually the latest snapshot. If you need to recover
lost data, pick an earlier snapshot.
4. Contact Azure Support through a high-priority support request. Ask for the restore of that snapshot with the
name and date of the snapshot or the HANA backup ID on the DR site. The default is that the operations side
restores the /hana/data volume only. If you want to have the /hana/logbackups volumes too, you need to
specifically state that. Do not restore the /hana/shared volume. Instead, choose specific files like global.ini
out of the .snapshot directory and its subdirectories after you remount the /hana/shared volume for PRD.
On the operations side, the following steps occur:
a. The replication of snapshots from the production volume to the disaster recovery volumes is stopped.
This disruption might have already happened if an outage at the production site is the reason you need to
perform the disaster recovery procedure.
b. The storage snapshot name or snapshot with the backup ID you chose is restored on the disaster recovery
volumes.
c. After the restore, the disaster recovery volumes are available to be mounted to the HANA Large Instance
units in the disaster recovery region.
5. Mount the disaster recovery volumes to the HANA Large Instance unit in the disaster recovery site.
6. Start the dormant SAP HANA production instance.
7. If you chose to copy transaction log backups to reduce the RPO time, merge the transaction log backups into the newly mounted DR /hana/logbackups directory. Don't overwrite existing backups. Copy newer backups that weren't replicated with the latest replication of a storage snapshot.
8. You can also restore single files out of the snapshots that weren't replicated to the /hana/shared/PRD
volume in the DR Azure region.
The following steps show how to recover the SAP HANA production instance based on the restored storage
snapshot and the transaction log backups that are available.
1. Change the backup location to /hana/logbackups by using SAP HANA Studio.
2. SAP HANA scans through the backup file locations and suggests the most recent transaction log backup to
restore to. The scan can take a few minutes until a screen like the following appears:
3. Adjust some of the default settings:
Clear Use Delta Backups.
Select Initialize Log Area.
4. Select Finish.
A progress window, like the one shown here, should appear. Keep in mind that the example is of a disaster recovery
restore of a three-node scale-out SAP HANA configuration.
If the restore stops responding at the Finish screen and doesn't show the progress screen, confirm that all the SAP
HANA instances on the worker nodes are running. If necessary, start the SAP HANA instances manually.
Next steps
See Monitor and troubleshoot from HANA side.
How to monitor SAP HANA (large instances) on
Azure
12/22/2020 • 2 minutes to read • Edit Online
SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment: you need to monitor what the OS and the applications are doing and how the applications consume the following resources:
CPU
Memory
Network bandwidth
Disk space
With Azure Virtual Machines, you need to figure out whether the resource classes named above are sufficient or whether they get depleted. Here is more detail on each of the different classes:
CPU resource consumption: The ratio that SAP defined for certain workloads against HANA is enforced to make sure that there are enough CPU resources available to work through the data that is stored in memory. Nevertheless, there might be cases where HANA consumes many CPUs executing queries because of missing indexes or similar issues. This means you should monitor CPU resource consumption of the HANA Large Instance unit as well as the CPU resources consumed by the specific HANA services.
Memory consumption: It's important to monitor memory from within HANA, as well as outside of HANA on the unit. Within HANA, monitor how the data is consuming HANA-allocated memory in order to stay within the required sizing guidelines of SAP. You also want to monitor memory consumption on the Large Instance level to make sure that additionally installed non-HANA software does not consume too much memory, and therefore compete with HANA for memory.
Network bandwidth: The Azure VNet gateway is limited in the bandwidth of data moving into the Azure VNet, so it is helpful to monitor the data received by all the Azure VMs within a VNet to figure out how close you are to the limits of the Azure gateway SKU you selected. On the HANA Large Instance unit, it also makes sense to monitor incoming and outgoing network traffic, and to keep track of the volumes that are handled over time.
Disk space: Disk space consumption usually increases over time. The most common causes are data volume increases, execution of transaction log backups, storing trace files, and performing storage snapshots. Therefore, it is important to monitor disk space usage and manage the disk space associated with the HANA Large Instance unit.
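As a starting point on the Linux side, the four resource classes above can be spot-checked with standard tools. A minimal sketch (sar requires the sysstat package; the volume paths are the usual HANA mount points):

top -b -n 1 | head -n 20       # CPU consumption per process
free -g                        # memory usage in GB
df -h /hana/data /hana/log     # disk space of the HANA data and log volumes
sar -n DEV 5 3                 # network throughput per interface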
For the Type II SKUs of HANA Large Instances, the server comes with preloaded system diagnostic tools. You can utilize these diagnostic tools to perform a system health check. Run the following command to generate the health check log file at /var/log/health_check:
/opt/sgi/health_check/microsoft_tdi.sh
When you work with the Microsoft Support team to troubleshoot an issue, you may also be asked to provide the
log files by using these diagnostic tools. You can zip the file using the following command.
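The exact packaging command isn't reproduced here; a minimal sketch, assuming the health check logs were written under /var/log/health_check:

tar -czvf health_check_logs.tar.gz /var/log/health_check*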
Next steps
Refer to Monitoring and troubleshooting from HANA side.
Monitoring and troubleshooting from HANA side
12/22/2020 • 4 minutes to read • Edit Online
To effectively analyze problems related to SAP HANA on Azure (Large Instances), it is useful to narrow down the root cause of a problem. SAP has published a large amount of documentation to help you.
Applicable FAQs related to SAP HANA performance can be found in the following SAP Notes:
SAP Note #2222200 – FAQ: SAP HANA Network
SAP Note #2100040 – FAQ: SAP HANA CPU
SAP Note #1999997 – FAQ: SAP HANA Memory
SAP Note #2000000 – FAQ: SAP HANA Performance Optimization
SAP Note #1999930 – FAQ: SAP HANA I/O Analysis
SAP Note #2177064 – FAQ: SAP HANA Service Restart and Crashes
CPU
For an alert triggered due to improper threshold setting, a resolution is to reset to the default value or a more
reasonable threshold value.
The following alerts may indicate CPU resource problems:
Host CPU Usage (Alert 5)
Most recent savepoint operation (Alert 28)
Savepoint duration (Alert 54)
You may notice high CPU consumption on your SAP HANA database from one of the following:
Alert 5 (Host CPU usage) is raised for current or past CPU usage
The displayed CPU usage on the overview screen
The Load graph might show high CPU consumption, or high consumption in the past:
An alert triggered due to high CPU utilization could be caused by several reasons, including, but not limited to:
execution of certain transactions, data loading, jobs that are not responding, long running SQL statements, and
bad query performance (for example, with BW on HANA cubes).
Refer to the SAP HANA Troubleshooting: CPU Related Causes and Solutions site for detailed troubleshooting steps.
Operating System
One of the most important checks for SAP HANA on Linux is to make sure that Transparent Huge Pages are
disabled, see SAP Note #2131662 – Transparent Huge Pages (THP) on SAP HANA Servers.
You can check whether Transparent Huge Pages are enabled with the following Linux command:
cat /sys/kernel/mm/transparent_hugepage/enabled
If always is enclosed in brackets, Transparent Huge Pages are enabled: [always] madvise never. If never is enclosed in brackets, Transparent Huge Pages are disabled: always madvise [never].
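If they are enabled, Transparent Huge Pages can be turned off at runtime as shown in this minimal sketch; make the setting persistent across reboots through your boot configuration as described in the SAP Note:

echo never > /sys/kernel/mm/transparent_hugepage/enabled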
The following Linux command should return nothing: rpm -qa | grep ulimit. If the output shows that ulimit is installed, uninstall it immediately.
Memory
You may observe that the amount of memory allocated by the SAP HANA database is higher than expected. The
following alerts indicate issues with high memory usage:
Host physical memory usage (Alert 1)
Memory usage of name server (Alert 12)
Total memory usage of Column Store tables (Alert 40)
Memory usage of services (Alert 43)
Memory usage of main storage of Column Store tables (Alert 45)
Runtime dump files (Alert 46)
Refer to the SAP HANA Troubleshooting: Memory Problems site for detailed troubleshooting steps.
Network
Refer to SAP Note #2081065 – Troubleshooting SAP HANA Network and perform the network troubleshooting
steps in this SAP Note.
1. Analyze the round-trip time between server and client by running the SQL script HANA_Network_Clients.
2. Analyze internode communication by running the SQL script HANA_Network_Services.
3. Run the Linux command ifconfig (the output shows whether any packet losses are occurring).
4. Run the Linux command tcpdump.
Also, use the open source IPERF tool (or similar) to measure real application network performance.
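A minimal iperf3 sketch for such a measurement; the host name hanaserver is a placeholder, and the tool must be installed on both endpoints:

iperf3 -s                          # run on the receiving side
iperf3 -c hanaserver -P 4 -t 30    # run on the sending side: 4 parallel streams for 30 seconds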
Refer to the SAP HANA Troubleshooting: Networking Performance and Connectivity Problems site for detailed
troubleshooting steps.
Storage
From an end-user perspective, an application (or the system as a whole) runs sluggishly, is unresponsive, or can
even seem to stop responding if there are issues with I/O performance. In the Volumes tab in SAP HANA Studio,
you can see the attached volumes, and what volumes are used by each service.
In the lower part of the Attached volumes screen, you can see details of the volumes, such as files and I/O statistics.
Refer to the SAP HANA Troubleshooting: I/O Related Root Causes and Solutions and SAP HANA Troubleshooting: Disk Related Root Causes and Solutions sites for detailed troubleshooting steps.
Diagnostic Tools
Perform an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool returns potentially critical
technical issues that should have already been raised as alerts in SAP HANA Studio.
Refer to SAP Note #1969700 – SQL statement collection for SAP HANA and download the SQL Statements.zip file
attached to that note. Store this .zip file on the local hard drive.
In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL Statements.
Select the SQL Statements.zip file stored locally, and a folder with the corresponding SQL statements will be
imported. At this point, the many different diagnostic checks can be run with these SQL statements.
For example, to test SAP HANA System Replication bandwidth requirements, right-click the Bandwidth statement
under Replication: Bandwidth and select Open in SQL Console.
The complete SQL statement opens allowing input parameters (modification section) to be changed and then
executed.
Another example is right-clicking the statements under Replication: Overview. Select Execute from the context menu:
HANA_Services_Overview for an overview of what SAP HANA services are currently running.
HANA_Services_Statistics for SAP HANA service information (CPU, memory, etc.).
HANA_Configuration_Overview_Rev110+ for general information on the SAP HANA instance.
Next steps
Refer to High availability setup in SUSE using the STONITH device.
Azure HANA Large Instances control through Azure
portal
12/22/2020 • 10 minutes to read • Edit Online
NOTE
For Rev 4.2, follow the instructions in the Manage BareMetal Instances through the Azure portal topic.
This document covers how HANA Large Instances are presented in the Azure portal and what activities can be conducted through the Azure portal with HANA Large Instance units that are deployed for you. Visibility of HANA Large Instances in the Azure portal is provided through an Azure resource provider for HANA Large Instances, which currently is in public preview.
For more information, see the article Azure resource providers and types.
Register through Azure portal
You can (re-)register the HANA Large Instance resource provider through the Azure portal. You need to list your subscriptions in the Azure portal and double-click the subscription that was used to deploy your HANA Large Instance unit(s). Once you are on the overview page of your subscription, select "Resource providers" as shown below and type "HANA" into the search window.
In the screenshot shown, the resource provider was already registered. In case the resource provider is not yet registered, select "Re-register" or "Register".
In the list of resource groups, you might need to filter on the subscription you used to deploy the HANA Large Instances. After filtering to the correct subscription, you still may have a long list of resource groups. Look for one with a postfix of -Txxx, where "xxx" is three digits, like -T050.
When you've found the resource group, list its details. The list you receive could look like the following:
Each entry listed represents a single HANA Large Instance unit that has been deployed in your subscription. In this case, you are looking at eight different HANA Large Instance units that were deployed in your subscription.
If you deployed several HANA Large Instance tenants under the same Azure subscription, you will find multiple Azure resource groups.
Looking at the different attributes shown, those attributes look little different from Azure VM attributes. On the left-hand side of the header, it shows the resource group, Azure region, subscription name, and ID, as well as any tags that you added. By default, the HANA Large Instance units have no tag assigned. On the right-hand side of the header, the name of the unit is listed as assigned when the deployment was done. The operating system is shown, as well as the IP address. As with VMs, the HANA Large Instance unit type with the number of CPU threads and memory is shown as well. More details on the different HANA Large Instance units are shown here:
Available SKUs for HLI
SAP HANA (Large Instances) storage architecture
Additional data on the right lower side is the revision of the HANA Large Instance stamp. Possible values are:
Revision 3
Revision 4
Revision 4 is the latest architecture released for HANA Large Instances, with major improvements in network latency between Azure VMs and HANA Large Instance units deployed in Revision 4 stamps or rows. Another important piece of information is found in the lower right corner of the overview: the name of the Azure proximity placement group that is automatically created for each deployed HANA Large Instance unit. This proximity placement group needs to be referenced when deploying the Azure VMs that host the SAP application layer. By using the Azure proximity placement group associated with the HANA Large Instance unit, you make sure that the Azure VMs are deployed in close proximity to the HANA Large Instance unit. The way proximity placement groups can be used to locate the SAP application layer in the same Azure datacenter as Revision 4 hosted HANA Large Instance units is described in Azure Proximity Placement Groups for optimal network latency with SAP applications.
An additional field in the right column of the header informs about the power state of the HANA Large instance
unit.
NOTE
The power state describes whether the hardware unit is powered on or off. It does not give information about the operating system being up and running. As you restart a HANA Large Instance unit, you will experience a short time in which the state of the unit changes to Starting before moving to the state of Started. Being in the state of Started means that the OS is starting up or that the OS has been started up completely. As a result, after a restart of the unit, you can't expect to immediately log in to the unit as soon as the state switches to Started.
If you press 'See more', additional information is shown. One additional piece of information displays the revision of the HANA Large Instance stamp the unit got deployed in. See the article What is SAP HANA on Azure (Large Instances)? for the different revisions of HANA Large Instance stamps.
One of the main activities recorded is the restart of a unit. The data listed includes the status of the activity, the time stamp at which the activity was triggered, the subscription ID out of which the activity was triggered, and the Azure user who triggered the activity.
Another activity that gets recorded is changes to the unit in the Azure metadata. Besides the restart initiated, you can see the activity of Write HANAInstances. This type of activity performs no changes on the HANA Large Instance unit itself, but documents changes to the metadata of the unit in Azure. In the case listed, we added and deleted a tag (see the next section).
You already saw the first few data items in the overview screen. An important portion of data is the ExpressRoute circuit ID, which you got as the first deployed units were handed over. In some support cases, you might get asked for that data. An important data entry is shown at the bottom of the screenshot: the IP address of the NFS storage head that isolates your storage to your tenant in the HANA Large Instance stack. This IP address is also needed when you edit the configuration file for storage snapshot backups.
As you scroll down in the property pane, you get additional data, like a unique resource ID for your HANA Large Instance unit, or the subscription ID that was assigned to the deployment.
When you press the restart button, you are asked whether you really want to restart the unit. When you confirm by pressing the button "Yes", the unit restarts.
NOTE
In the restart process, you will experience a short time in which the state of the unit changes to Starting before moving to the state of Started. Being in the state of Started means that the OS is starting up or that the OS has been started up completely. As a result, after a restart of the unit, you can't expect to immediately log in to the unit as soon as the state switches to Started.
IMPORTANT
Depending on the amount of memory in your HANA Large Instance unit, a restart and reboot of the hardware and the operating system can take up to one hour.
In order to get the service of SAP HANA Large Instances listed in the next screen, you might need to select "All services" as shown below.
In the list of services, you can find the service SAP HANA Large Instance . As you choose that service, you can
select specific problem types as shown:
Under each of the different problem types, you are offered a selection of problem subtypes that you need to select to characterize your problem further. After selecting the subtype, you can name the subject. Once you are done with the selection process, you can move to the next step of the creation. In the Solutions section, you are pointed to documentation around HANA Large Instances, which might give a pointer to a solution of your problem. If you can't find a solution for your problem in the documentation suggested, you go to the next step. In the next step, you are asked whether the issue is with VMs or with HANA Large Instance units. This information helps to direct the support request to the correct specialists.
After you've answered the questions and provided additional details, you can go to the next step to review the support request and then submit it.
Next steps
How to monitor SAP HANA (large instances) on Azure
Monitoring and troubleshooting from HANA side
Manage BareMetal Instances through the Azure
portal
12/22/2020 • 6 minutes to read • Edit Online
This article shows how the Azure portal displays BareMetal Instances. This article also shows you the activities you
can do in the Azure portal with your deployed BareMetal Instance units.
For more information, see the article Azure resource providers and types.
Azure portal
You can register the BareMetalInfrastructure resource provider through the Azure portal.
You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy
your BareMetal Instance units.
1. Sign in to the Azure portal.
2. On the Azure portal menu, select All services.
3. In the All services box, enter subscription, and then select Subscriptions.
4. Select the subscription from the subscription list to view.
5. Select Resource providers and enter BareMetalInfrastructure into the search. The resource provider should be Registered, as the image shows.
NOTE
If the resource provider is not registered, select Register .
BareMetal Instance units in the Azure portal
When you submit a BareMetal Instance deployment request, you'll specify the Azure subscription that you're
connecting to the BareMetal Instances. Use the same subscription you use to deploy the application layer that
works against the BareMetal Instance units.
During the deployment of your BareMetal Instances, a new Azure resource group gets created in the Azure
subscription you used in the deployment request. This new resource group lists all your BareMetal Instance units
you've deployed in the specific subscription.
1. In the BareMetal subscription, in the Azure portal, select Resource groups .
2. In the list, locate the new resource group.
TIP
You can filter on the subscription you used to deploy the BareMetal Instance. After you filter to the proper subscription, you might have a long list of resource groups. Look for one with a postfix of -Txxx, where xxx is three digits, like -T250.
3. Select the new resource group to show the details of it. The image shows one BareMetal Instance unit
deployed.
NOTE
If you deployed several BareMetal Instance tenants under the same Azure subscription, you would see multiple Azure
resource groups.
View the attributes of a single instance
You can view the details of a single unit. In the list of BareMetal instances, select the single instance you want to view.
The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left,
you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see
them here as well. By default, the BareMetal Instance units don't have tags assigned.
On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU
threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance
stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however,
don't indicate whether it's up and running.
The possible hardware revisions are:
Revision 3
Revision 4
Revision 4.2
NOTE
Revision 4.2 is the latest rebranded BareMetal Infrastructure using the Revision 4 architecture. It has significant
improvements in network latency between Azure VMs and BareMetal instance units deployed in Revision 4 stamps or rows.
Also, on the right side, you'll find the Azure Proximity Placement Group's name, which is created automatically for
each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs
that host the application layer. When you use the Proximity Placement Group associated with the BareMetal
Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.
TIP
To locate the application layer in the same Azure datacenter as Revision 4.x, see Azure proximity placement groups for
optimal network latency.
When you restart a BareMetal Instance unit, you'll experience a delay. During this delay, the power state moves from Starting to Started, which means the OS has started up completely. As a result, after a restart, you can't log into the unit as soon as the state switches to Started.
IMPORTANT
Depending on the amount of memory in your BareMetal Instance unit, a restart and a reboot of the hardware and the
operating system can take up to one hour.
Next steps
If you want to learn more about BareMetal, see BareMetal workload types.
High availability set up in SUSE using the STONITH
12/22/2020 • 11 minutes to read • Edit Online
This document provides detailed step-by-step instructions to set up high availability on the SUSE operating system using the STONITH device.
Disclaimer: This guide is derived from testing the setup in the Microsoft HANA Large Instances environment, where it works successfully. Because the Microsoft Service Management team for HANA Large Instances does not support the operating system, you may need to contact SUSE for any further troubleshooting or clarification on the operating system layer. The Microsoft Service Management team does set up the STONITH device, fully supports it, and can be involved in troubleshooting STONITH device issues.
Overview
To set up high availability using SUSE clustering, the following prerequisites must be met.
Pre-requisites
HANA Large Instances are provisioned.
The operating system is registered.
HANA Large Instance servers are connected to an SMT server to get patches/packages.
The operating system has the latest patches installed.
NTP (time server) is set up.
Read and understand the latest version of the SUSE documentation on HA setup.
Setup details
This guide uses the following setup:
Operating System: SLES 12 SP1 for SAP
HANA Large Instances: 2xS192 (four sockets, 2 TB)
HANA Version: HANA 2.0 SP1
Server Names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
STONITH Device: iSCSI based STONITH device
NTP set up on one of the HANA Large Instance nodes
When you set up HANA Large Instances with HSR, you can request that the Microsoft Service Management team set up STONITH. If you are an existing customer who has HANA Large Instances provisioned and needs the STONITH device set up for your existing blades, you need to provide the following information to the Microsoft Service Management team in the service request form (SRF). You can request the SRF form through the Technical Account Manager or your Microsoft contact for HANA Large Instance onboarding. New customers can request the STONITH device at the time of provisioning. The inputs are available in the provisioning request form.
Server Name and Server IP address (for example, myhanaserver1, 10.35.0.1)
Location (for example, US East)
Customer Name (for example, Microsoft)
SID - HANA System Identifier (for example, H11)
Once the STONITH device is configured, the Microsoft Service Management team provides you the SBD device name and the IP address of the iSCSI storage, which you can use to configure the STONITH setup.
To set up end-to-end HA using STONITH, the following steps need to be followed:
1. Identify the SBD device
2. Initialize the SBD device
3. Configure the cluster
4. Set up the softdog watchdog
5. Join the node to the cluster
6. Validate the cluster
7. Configure the resources to the cluster
8. Test the failover process
iqn.1996-04.de.suse:01:<Tenant><Location><SID><NodeNumber>
Microsoft Service Management provides this string. Modify the initiator name file (/etc/iscsi/initiatorname.iscsi) on both nodes; however, the node number is different on each node.
1.4 Run the following command on both nodes to log in to the iSCSI device. It shows four sessions.
iscsiadm -m node -l
1.5 Run the rescan script rescan-scsi-bus.sh on both nodes. This script shows you the new disks created for you. You should see a LUN number greater than zero (for example: 1, 2, and so on).
rescan-scsi-bus.sh
1.6 To get the device name, run the command fdisk -l on both nodes. Pick the device with the size of 178 MiB.
fdisk -l
2.2 Check what has been written to the device. Do this on both nodes:
Click Next
Click Next
In the default option, Booting is off; change it to "on" so that Pacemaker is started on boot. You can make the choice based on your setup requirements. Click Next, and the cluster configuration is complete.
modprobe softdog
4.4 Check and ensure that softdog is running on both nodes.
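One way to verify this is to list the loaded kernel modules; a minimal sketch:

lsmod | grep -i dog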
/usr/share/sbd/sbd.sh start
4.6 Test the SBD daemon on both nodes. You see two entries after you configure it on both nodes.
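A minimal sketch of such a test, assuming <SBD Device Name> is the device name handed over by Microsoft Service Management:

sbd -d <SBD Device Name> list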
4.8 On the second node (node2), you can check the message status.
4.9 To adopt the SBD configuration, update the file /etc/sysconfig/sbd as follows. Update the file on both nodes.
SBD_DEVICE="<SBD Device Name>"
SBD_WATCHDOG="yes"
SBD_PACEMAKER="yes"
SBD_STARTMODE="clean"
SBD_OPTS=""
To join the node to the cluster, run the following command on node2:
ha-cluster-join
If you receive an error while joining the cluster, refer to Scenario 6: Node 2 unable to join the cluster.
Validate the cluster status with the following command:
crm_mon
You can also log in to Hawk to check the cluster status: https://<node IP>:7630. The default user is hacluster and the password is linux. If needed, you can change the password using the passwd command.
sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
Add the configuration to the cluster.
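This is typically done by loading the file into the live cluster configuration; a minimal sketch using the crm shell:

crm configure load update crm-bs.txt

The same pattern applies to the crm-sbd.txt and crm-vip.txt files shown below.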
# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15"
# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"
Now, stop the Pacemaker service on node2; the resources fail over to node1.
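A minimal sketch of this failover test; run on node2:

systemctl stop pacemaker

You can then watch the takeover with crm_mon on node1.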
Before failover
After failover
9. Troubleshooting
This section describes a few failure scenarios that can be encountered during the setup. You may not necessarily face these issues.
Scenario 1: Cluster node not online
If any of the nodes does not show as online in the cluster manager, you can try the following to bring it online.
Start the iSCSI service
iscsiadm -m node -l
Expected Output
If yast2 does not open with the graphical view, follow these steps.
Install the required packages. You must be logged in as user "root" and have SMT set up to download and install the packages.
To install the packages, use the yast > Software > Software Management > Dependencies option "Install recommended packages...". The following screenshot illustrates the expected screens.
NOTE
You need to perform the steps on both the nodes, so that you can access the yast2 graphical view from both the nodes.
Click Next
Click Finish
You also need to install the libqt4 and libyui-qt packages:
zypper -n install libqt4
zypper -n install libyui-qt
Yast2 should be able to open the graphical view now as shown here.
Click Continue
Click Next when the installation is complete
To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer:
Persistent=true
After the preceding fix, node2 should get added to the cluster.
OS backup and restore for Type II SKUs of Revision 3 stamps
This document describes the steps to perform an operating system file-level backup and restore for the Type II SKUs of HANA Large Instances of Revision 3.
IMPORTANT
This article does not apply to Type II SKU deployments in Revision 4 HANA Large Instance stamps. Boot LUNs of Type II HANA Large Instance units that are deployed in Revision 4 HANA Large Instance stamps can be backed up with storage snapshots, as is already the case with Type I SKUs in Revision 3 stamps.
NOTE
The OS backup script uses the ReaR software, which is preinstalled on the server.
After the provisioning is completed by the Microsoft Service Management team, the server is by default configured with two backup schedules that back up the operating system at the file system level. You can check the schedules of the backup jobs by using the following command:
#crontab -l
You can change the backup schedule anytime using the following command:
#crontab -e
You can take a backup on demand at any time by using the following command:
#rear -v mkbackup
The following command shows the restore of the file /etc/fstab from the backup file backup.tar.gz. After the restore, the file is recovered in the current working directory.
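A minimal sketch, assuming backup.tar.gz is accessible from the current directory; note that paths inside the archive typically carry no leading slash:

tar -xvf backup.tar.gz etc/fstab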
NOTE
You need to copy the file to the desired location after it is restored from the backup.
For reference, the ReaR backup configuration file contains entries like the following:
OUTPUT=ISO
ISO_MKISOFS_BIN=/usr/bin/ebiso
BACKUP=NETFS
OUTPUT_URL="nfs://nfsip/nfspath/"
BACKUP_URL="nfs://nfsip/nfspath/"
BACKUP_OPTIONS="nfsvers=4,nolock"
NETFS_KEEP_OLD_BACKUP_COPY=
EXCLUDE_VG=( vgHANA-data-HC2 vgHANA-data-HC3 vgHANA-log-HC2 vgHANA-log-HC3 vgHANA-shared-HC2 vgHANA-shared-HC3 )
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp/*' '/var/crash' '/hana' '/usr/sap' '/proc')
Enable Kdump for HANA Large Instances
Configuring and enabling kdump is needed to troubleshoot system crashes that do not have a clear cause. Sometimes a system crashes unexpectedly in a way that cannot be explained by a hardware or infrastructure problem. In such cases, an operating system or application problem may be the cause, and kdump allows SUSE to determine why the system crashed.
Supported SKUs
(Table: HANA Large Instance type, OS vendor, OS package version, SKU)
Prerequisites
The kdump service uses the /var/crash directory to write dumps. Make sure the partition that corresponds to this directory has sufficient space to accommodate dumps.
Setup details
The script to enable kdump can be found here.
NOTE
This script is based on our lab setup; you are expected to contact your OS vendor for any further tuning. A separate LUN will be provisioned for new and existing servers for saving the dumps, and the script takes care of configuring the file system out of the LUN. Microsoft will not be responsible for analyzing the dump. You have to open a ticket with your OS vendor to get it analyzed.
Run this script on the HANA Large Instance by using the following command.
NOTE
sudo privileges are needed to run this command.
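A hypothetical invocation, assuming the script was saved locally as enable-kdump.sh (the actual script name may differ):

sudo ./enable-kdump.sh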
If the command outputs "Kdump is successfully enabled", make sure to reboot the system to apply the changes.
If the command outputs "Failed to do certain operation, Exiting!!!!", then the kdump service is not enabled. Refer to the section Support issue.
Test Kdump
NOTE
The following operation triggers a kernel crash and a system reboot.
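A common way to trigger such a test crash is the magic SysRq interface; run as root, and expect the server to reboot immediately:

echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger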
After the system reboots successfully, check the /var/crash directory for kernel crash logs. If /var/crash contains a directory with the current date, kdump is successfully enabled.
Support issue
If the script fails with an error, or if kdump isn't enabled, raise a service request with the Microsoft support team that includes the following details:
HLI subscription ID
Server name
OS vendor
OS version
Kernel version
Related Documents
To learn more about configuring kdump, see the related documentation.
Operating System Upgrade
12/22/2020 • 4 minutes to read • Edit Online
This document describes the details of operating system upgrades on HANA Large Instances.
NOTE
The OS upgrade is the customer's responsibility. Microsoft operations support can guide you to the key areas to watch out for during the upgrade. You should also consult your operating system vendor before you plan an upgrade.
NOTE
This article contains references to the term blacklist, a term that Microsoft no longer uses. When the term is removed from
the software, we'll remove it from this article.
During HLI unit provisioning, the Microsoft operations team installs the operating system. Over time, you are required to maintain the operating system (for example: patching, tuning, upgrading, and so on) on the HLI unit.
Before you make major changes to the operating system (for example, upgrading SP1 to SP2), you must contact the Microsoft operations team by opening a support ticket to consult.
Include in your ticket:
Your HLI subscription ID.
Your server name.
The patch level you are planning to apply.
The date you are planning this change.
We recommend opening this ticket at least one week before the desired upgrade, which lets the operations team know about the desired firmware version.
For the support matrix of the different SAP HANA versions with the different Linux versions, see SAP Note
#2235581.
Known issues
The following are a few common known issues during the upgrade:
On Type II class SKUs, the software foundation software (SFS) is removed after the OS upgrade. You need to reinstall the compatible SFS after the OS upgrade.
Ethernet card drivers (ENIC and FNIC) are rolled back to an older version. You need to reinstall the compatible version of the drivers after the upgrade.
(Table: OS vendor, OS package version, firmware version, ENIC driver, FNIC driver)
To reinstall the compatible drivers, remove the old driver package and verify the installed driver versions:
rpm -e <old-rpm-package>
modinfo enic
modinfo fnic
NOTE
LUN ID varies from server to server.
Disable EDAC
The Error Detection And Correction (EDAC) module helps in detecting and correcting memory errors. However, the
underlying hardware for SAP HANA on Azure Large Instances (Type I) is already performing the same function.
Having the same feature enabled at the hardware and operating system (OS) levels can cause conflicts and can
lead to occasional, unplanned shutdowns of the server. Therefore, it is recommended to disable the module from
the OS.
Execution Steps
Check whether the EDAC module is enabled. If the command below returns output, the module is enabled.
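A minimal sketch of such a check:

lsmod | grep -i edac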
Disable the modules by appending the following lines to the file /etc/modprobe.d/blacklist.conf:
blacklist sb_edac
blacklist edac_core
A reboot is required for the changes to take effect. After the reboot, execute the lsmod command and verify that the module is not present in the output.
Kernel parameters
Make sure the correct settings for transparent_hugepage, numa_balancing, processor.max_cstate, mce=ignore_ce, and intel_idle.max_cstate are applied:
intel_idle.max_cstate=1
processor.max_cstate=1
transparent_hugepage=never
numa_balancing=disable
mce=ignore_ce
Execution Steps
Add these parameters to the GRUB_CMDLINE_LINUX line in the file /etc/default/grub.
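For illustration, the resulting line could look like the following; keep any parameters that already exist on your system:

GRUB_CMDLINE_LINUX="... intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce"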
Regenerate the GRUB configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot the system.
Next steps
Refer to Backup and restore for the OS backup of the Type I SKU class.
Refer to OS Backup for Type II SKUs of Revision 3 stamps for the Type II SKU class.
Set up SMT server for SUSE Linux
12/22/2020 • 5 minutes to read • Edit Online
Large Instances of SAP HANA don't have direct connectivity to the internet. As a result, it isn't straightforward to register such a unit with the operating system provider and to download and apply updates. A solution for SUSE Linux is to set up an SMT server in an Azure virtual machine. Host the virtual machine in an Azure virtual network that is connected to the HANA Large Instance. With such an SMT server, the HANA Large Instance unit can register and download updates.
For more documentation on SUSE, see their Subscription Management Tool for SLES 12 SP2.
Prerequisites for installing an SMT server that fulfills the task for HANA Large Instances are:
An Azure virtual network that is connected to the HANA Large Instance ExpressRoute circuit.
A SUSE account that is associated with an organization. The organization should have a valid SUSE subscription.
In this example, the deployed virtual machine is a smaller VM that got the internal IP address 10.34.1.4 in the Azure virtual network. The name of the virtual machine is smtserver. After the installation, the connectivity to the HANA Large Instance unit or units is checked. Depending on how you organized name resolution, you might need to configure resolution of the HANA Large Instance units in etc/hosts of the Azure virtual machine.
Add a disk to the virtual machine. You use this disk to hold the updates; the boot disk itself could be too small. Here, the disk is mounted to /srv/www/htdocs, as shown in the following screenshot. A 100-GB disk should suffice.
Sign in to the HANA Large Instance unit or units, maintain /etc/hosts, and check whether you can reach the Azure
virtual machine that is supposed to run the SMT server over the network.
After this check, sign in to the Azure virtual machine that should run the SMT server. If you are using putty to sign
in to the virtual machine, run this sequence of commands in your bash window:
cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc
After the virtual machine is connected to the SUSE site, install the SMT packages. Use the following command to install the SMT packages.
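The installation is typically a single zypper call; a minimal sketch, assuming the VM is registered with SUSE:

zypper in smt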
You can also use the YAST tool to install the smt packages. In YAST, go to Software Maintenance , and search for
smt. Select smt , which switches automatically to yast2-smt.
Accept the selection for installation on the smtserver. After the installation completes, go to the SMT server
configuration. Enter the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter
your Azure virtual machine hostname as the SMT Server URL. In this demonstration, it's https://smtserver.
Now test whether the connection to the SUSE Customer Center works. As you see in the following screenshot, in
this demonstration case, it did work.
After the SMT setup starts, provide a database password. Because it's a new installation, you should define that
password as shown in the following screenshot.
rcsmt restart
systemctl restart smt.service
systemctl restart apache2
Next, start the initial copy of the selected packages to the SMT server you set up. This copy is triggered in the shell by using the command smt-mirror.
The packages should get copied into the directories created under the mount point /srv/www/htdocs. This process
can take an hour or more, depending on how many packages you select. As this process finishes, move to the SMT
client setup.
It's possible that the load of the certificate from the server by the client succeeds, but the registration fails, as
shown in the following screenshot.
If the registration fails, see SUSE support document, and run the steps described there.
IMPORTANT
For the server name, provide the name of the virtual machine (in this case, smtserver), without the fully qualified domain
name.
After running these steps, run the following command on the HANA Large Instance unit:
SUSEConnect --cleanup
NOTE
Wait a few minutes after that step. If you run clientSetup4SMT.sh immediately, you might get an error.
If you encounter a problem that you need to fix based on the steps of the SUSE article, restart clientSetup4SMT.sh
on the HANA Large Instance unit. Now it should finish successfully.
You have now configured the SMT client of the HANA Large Instance unit to connect to the SMT server you installed in the Azure virtual machine. You can now use 'zypper up' or 'zypper in' to install operating system updates to HANA Large Instances, or to install additional packages. You can get only updates that you previously downloaded to the SMT server.
Next steps
HANA Installation on HLI.
SAP HANA on Azure Large Instance migration to
Azure Virtual Machines
12/22/2020 • 16 minutes to read • Edit Online
This article describes possible Azure Large Instance deployment scenarios and offers a planning and migration approach that minimizes transition downtime.
Overview
Since the announcement of Azure Large Instances for SAP HANA (HLI) in September 2016, many customers have adopted this hardware-as-a-service offering for their in-memory compute platform. In recent years, the Azure VM size extension, coupled with the support of HANA scale-out deployment, has exceeded most enterprise customers' ERP database capacity demand. We have begun to see customers expressing interest in migrating their SAP HANA workload from physical servers to Azure VMs. This guide isn't a step-by-step configuration document. It describes the common deployment models and offers planning and migration advice. The intent is to call out necessary considerations for preparation to minimize transition downtime.
Assumptions
This article makes the following assumptions:
The only interest considered is a homogenous HANA database compute service migration from HANA Large Instance (HLI) to Azure VM without significant software upgrade or patching. These minor updates include the use of a more recent OS version or HANA version explicitly stated as supported by relevant SAP notes.
All update/upgrade activities need to be done before or after the migration. For example, converting SAP HANA MCOS to an MDC deployment.
The migration approach that would offer the least downtime is SAP HANA System Replication. Other migration
methods aren't part of the scope of this document.
This guidance is applicable for both Rev3 and Rev4 SKUs of HLI.
HANA deployment architecture remains primarily unchanged during the migration. That is, a system with single
instance DR will stay the same way at the destination.
Customers have reviewed and understood the Service Level Agreement (SLA) of the target (to-be) architecture.
Commercial terms between HLIs and VMs are different. Customers should monitor the usage of their VMs for
cost management.
Customers understand that HLI is a dedicated compute platform while VMs run on shared yet isolated
infrastructure.
Customers have validated that target VMs support your intended architecture. To see all the supported VM
SKUs certified for SAP HANA deployment, see the SAP HANA hardware directory.
Customers have validated the design and migration plan.
Plan for disaster recovery VM along with the primary site. Customers can't use the HLI as the DR node for the
primary site running on VMs after the migration.
Customers copied the required backup files to the target VMs, based on business recoverability and compliance requirements. With VM-accessible backups, point-in-time recovery is possible during the transition period.
For HSR HA, customers need to set up and configure the STONITH device per SAP HANA HA guides for SLES
and RHEL. It’s not preconfigured like the HLI case.
This migration approach doesn't cover the HLI SKUs with Optane configuration.
Deployment scenarios
Common deployment models with HLI customers are summarized in the following table. Migration to Azure VMs
for all HLI scenarios is possible. To benefit from complementary Azure services available, minor architectural
changes may be required.
(Table excerpt — Scenario 7: Host auto failover (1+1); migration to Azure VMs: Yes; remark: use ANF for shared storage with Azure VMs.)
Destination planning
Standing up a new infrastructure to take the place of an existing one deserves some thought to ensure the new addition fits into the larger scheme of things. Below are some key points to consider.
Resource availability in the target region
The current SAP application servers' deployment region is typically in close proximity to the associated HLIs. However, HLIs are offered in fewer locations than available Azure regions.
Azure VM, it's also a good time to ‘fine-tune’ the proximity distance of all related services for performance
optimization. While doing so, one key consideration is to ensure the chosen region has all required resources. For
example, the availability of certain VM family or the offering of Azure Zones for high availability setup.
Virtual network
Customers need to choose whether to run the new HANA database in an existing virtual network or to create a new one. The primary deciding factor is the current networking layout for the SAP landscape. Also, when the infrastructure goes from a one-zone to a two-zone deployment and uses a PPG, it imposes an architectural change. For more information, see the article Azure PPG for optimal network latency with SAP application.
Security
Whether the new SAP HANA VM lands on a new or an existing virtual network/subnet, it represents a new business-critical service that requires safeguarding. Access control compliant with your company's information security policy ought to be evaluated and deployed for this new class of service.
VM sizing recommendation
This migration is also an opportunity to right-size your HANA compute engine. You can use HANA system views in conjunction with HANA Studio to understand the system resource consumption, which allows for right-sizing to drive spending efficiency.
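A minimal sketch of such a check with hdbsql against the M_HOST_RESOURCE_UTILIZATION system view; the instance number and credentials are placeholders:

hdbsql -i 00 -u SYSTEM -p <password> "SELECT HOST, ROUND(USED_PHYSICAL_MEMORY/1024/1024/1024) AS USED_MEM_GB, ROUND(FREE_PHYSICAL_MEMORY/1024/1024/1024) AS FREE_MEM_GB FROM M_HOST_RESOURCE_UTILIZATION"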
Storage
Storage performance is one of the factors that impact the SAP application user experience. For a given VM SKU, there are minimum storage layouts published in SAP HANA Azure virtual machine storage configurations. We recommend reviewing these minimum specs and comparing them against the existing HLI system statistics to ensure adequate IO capacity and performance for the new HANA VM.
If you configure a PPG for the new HANA VM and its associated servers, submit a support ticket to inspect and ensure the co-location of the storage and the VM. Since your backup solution may need to change, the storage cost should also be revisited to avoid operational spending surprises.
Storage replication for disaster recovery
With HLI, storage replication was offered as the default option for disaster recovery. This feature is not the default option for SAP HANA on Azure VMs. Consider HSR, backup/restore, or other supported solutions that satisfy your business needs.
Availability Sets, Availability Zones, and Proximity Placement Groups
To shorten distance between the application layer and SAP HANA to keep network latency at a minimum, the new
database VM and the current SAP application servers should be placed in a PPG. Refer to Proximity Placement
Group to learn how Azure Availability Set and Availability Zones work with PPG for SAP deployments. If members
of the target HANA system are deployed in more than one Azure Zone, customers should have a clear view of the
latency profile of the chosen zones. The placement of SAP system components should be optimized for proximity between the SAP application and the database. The public domain Availability zone latency test tool helps make the measurement easier.
Backup strategy
Many customers are already using third-party backup solutions for SAP HANA on HLI. In that case, only an additional protected VM and the HANA databases need to be configured. Ongoing HLI backup jobs can be unscheduled if the machine is being decommissioned after the migration. Azure Backup for SAP HANA on VM is
now generally available. See these links for detailed information about: Backup, Restore, Manage SAP HANA
backup in Azure VMs.
DR strategy
If your service level objectives accommodate a longer recovery time, a simple backup to blob storage and restore
in place or restore to a new VM is the simplest and least expensive DR strategy.
As on the Large Instance platform, where HANA DR is typically done with HSR, HSR is also the most natural and native SAP HANA DR solution on Azure VMs. Regardless of whether the source deployment is single-instance or clustered, a replica of the source infrastructure is required in the DR region. This DR replica is configured after the primary HLI-to-VM migration is complete. The DR HANA DB registers to the primary SAP HANA on VM instance as a secondary replication site.
SAP application server connectivity destination change
The HSR migration results in a new HANA DB host, and hence a new DB hostname for the application layer, so SAP profiles need to be modified to reflect the new hostname. If the switching is done by name resolution preserving the hostname, no profile change is required.
Operating system
The operating system images for HLI and VM, despite being on the same release level, SLES 12 SP4 for example,
aren't identical. Customers must validate the required packages, hot fixes, patches, kernel, and security fixes on the
HLI to install the same packages on the target. It's supported to use HSR to replicate from an older OS onto a VM
with a newer OS version. Verify the specific supported versions by reviewing SAP note 2763388.
New SAP license request
A simple call-out: request a new SAP license key for the new HANA system now that it's been migrated to VMs.
Service level agreement (SLA ) differences
It's worth calling out the difference in availability SLA between HLI and Azure VMs. For example, clustered HLI HA pairs offer 99.99% availability. To achieve the same SLA, you must deploy VMs in availability zones. This article describes availability with associated deployment architectures so customers can plan their target infrastructure accordingly.
Migration strategy
In this document, we cover only the HANA System Replication approach for the migration from HLI to Azure VM.
Depending on the target storage solution deployed, the process differs slightly. The high-level steps are described
below.
VM with premium/ultra-disks for data
For VMs that are deployed with premium or ultra-disks, the standard SAP HANA system replication configuration
is applicable for setting up HSR. The SAP help article provides an overview of the steps involved in setting up
system replication, taking over a secondary system, failing back to the primary, and disabling system replication.
For the purpose of the migration, we only need the setup, takeover, and disabling replication steps, as sketched below.
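As a minimal sketch of those steps, assuming the commands run as the <sid>adm user and that the site names, remote host, and instance number are placeholders, the hdbnsutil sequence could look like:

# On the HLI primary, enable system replication
hdbnsutil -sr_enable --name=HLISITE

# On the target Azure VM (same SID and instance number), register as secondary
hdbnsutil -sr_register --remoteHost=hli-host --remoteInstance=00 \
    --replicationMode=sync --operationMode=logreplay --name=AZVMSITE

# Verify that replication is active and in sync
hdbnsutil -sr_state

# At the migration cutover, take over on the Azure VM ...
hdbnsutil -sr_takeover

# ... and disable replication on the old primary
hdbnsutil -sr_disable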
VM with ANF for data and log volumes
At a high level, the latest HLI storage snapshots of the full data and log volumes need to be copied to Azure Storage
where they are accessible and recoverable by the target HANA VM. The copy process can be done with any native
Linux copy tools.
IMPORTANT
Copying and data transfer can take hours, depending on the HANA database size and network bandwidth. The bulk of the copy
process should be done in advance of the primary HANA DB downtime.
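For illustration, assuming the HLI volumes expose NetApp-style .snapshot directories and that all paths, SID, and hostnames shown are placeholders, the transfer could be scripted with a standard tool such as rsync:

# Copy the latest data volume snapshot to the target HANA VM over SSH
rsync -avP /hana/data/SID/mnt00001/.snapshot/latest-snap/ \
    azureuser@target-hana-vm:/hana/data/SID/mnt00001/

# Repeat for the log volume snapshot
rsync -avP /hana/log/SID/mnt00001/.snapshot/latest-snap/ \
    azureuser@target-hana-vm:/hana/log/SID/mnt00001/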
Post migration
The migration job is not done until we have safely decoupled any HLI-dependent services or connectivity to ensure
data integrity is preserved. Also, shut down unnecessary services. This section calls out a few top-of-mind items.
Decommissioning the HLI
After a successful migration of the HANA DB to an Azure VM, ensure no productive business transactions run on the
HLI DB. However, keeping the HLI running for a period of time equal to its local backup retention window is a safe
practice that ensures speedier recovery if needed. Only then should the HLI blade be decommissioned. Customers
should contractually conclude their HLI commitments with Microsoft by contacting their Microsoft representatives.
Remove any proxy (for example, IPTables, BIG-IP) configured for HLI
If a proxy service like IPTables is used to route on-premises traffic to and from the HLI, it is no longer needed
after the successful migration to the VM. However, this connectivity service should be kept for as long as the HLI blade
is still standing by. Only shut down the service after the HLI blade is fully decommissioned.
Remove Global Reach for HLI
Global Reach is used to connect customers' ExpressRoute gateway with the HLI ExpressRoute gateway. It allows
customers' on-premises traffic to reach the HLI tenant directly without the use of a proxy service. This connection is
no longer needed in the absence of the HLI unit after migration. As with the IPTables proxy service,
Global Reach should also be kept until the HLI blade is fully decommissioned.
Operating system subscription – move/reuse
As the VM servers are stood up and the HLI blades are decommissioned, the OS subscriptions can be replaced or
reused to avoid paying twice for OS licenses.
Next steps
See these articles:
SAP HANA infrastructure configurations and operations on Azure.
SAP workloads on Azure: planning and deployment checklist.
Save on SAP HANA Large Instances with an Azure
reservation
11/2/2020 • 5 minutes to read
You can save on your SAP HANA Large Instances (HLI) costs when you pre-purchase Azure reservations for one or
three years. The reservation discount is applied to the provisioned HLI SKU that matches the reserved instance
purchased. This article helps you understand the things you need to know before you buy a reservation and how
to make the purchase.
By purchasing a reservation, you commit to usage of the HLI for one or three years. The HLI reserved capacity
purchase covers the compute and NFS storage that comes bundled with the SKU. The reservation doesn't include
software licensing costs such as the operating system, SAP, or additional storage costs. The reservation discount
automatically applies to the provisioned SAP HLI. When the reservation term ends, pay-as-you-go rates apply to
your provisioned resource.
Purchase considerations
An HLI SKU must be provisioned before going through the reserved capacity purchase. The reservation is paid for
up front or with monthly payments. The following restrictions apply to HLI reserved capacity:
Reservation discounts apply to Enterprise Agreement and Microsoft Customer Agreement subscriptions only.
Other subscriptions aren't supported.
Instance size flexibility isn't supported for HLI reserved capacity. A reservation applies only to the SKU and the
region that you purchase it for.
Self-service cancellation and exchange aren't supported.
The reserved capacity scope is a single scope, so it applies to a single subscription and resource group. The
purchased capacity can't be updated for use by another subscription.
You can't have a shared reservation scope for HANA reserved capacity. You can't split, merge, or update
reservation scope.
You can purchase a single HLI at a time using the reserved capacity API calls. Make additional API calls to buy
additional quantities.
You can purchase reserved capacity in the Azure portal or by using the REST API.
For more information about data fields and their descriptions, see HLI reservation fields.
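As a hedged sketch of those API calls, assuming a valid Azure AD bearer token in $TOKEN, a purchase request body saved as purchase_request.json, and the Azure Reservations REST endpoints and api-version as the author understands them (the GUID is a placeholder):

# Calculate the price for the reservation; the response includes a quoteId
curl -X POST \
  "https://management.azure.com/providers/Microsoft.Capacity/calculatePrice?api-version=2019-04-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @purchase_request.json

# Place the order with a PUT against a new reservationOrderId (a GUID you generate)
curl -X PUT \
  "https://management.azure.com/providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-version=2019-04-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @purchase_request.json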
The following example response resembles what is returned. Note the value returned for quoteId .
{
  "properties": {
    "currencyCode": "USD",
    "netTotal": 313219.0,
    "taxTotal": 0.0,
    "isTaxIncluded": false,
    "grandTotal": 313219.0,
    "purchaseRequest": {
      "sku": {
        "name": "SAP_HANA_On_Azure_S224om"
      },
      "location": "eastus",
      "properties": {
        "billingScopeId": "/subscriptions/11111111-1111-1111-111111111111",
        "term": "P1Y",
        "billingPlan": "Upfront",
        "quantity": 1,
        "displayName": "testreservation_S224om",
        "appliedScopes": [
          "/subscriptions/11111111-1111-1111-111111111111"
        ],
        "appliedScopeType": "Single",
        "reservedResourceType": "SapHana",
        "instanceFlexibility": "NotSupported"
      }
    },
    "quoteId": "d0fd3a890795",
    "isBillingPartnerManaged": true,
    "reservationOrderId": "22222222-2222-2222-2222-222222222222",
    "skuTitle": "SAP HANA on Azure Large Instances - S224om - US East",
    "skuDescription": "SAP HANA on Azure Large Instances, S224om",
    "pricingCurrencyTotal": {
      "currencyCode": "USD",
      "amount": 313219.0
    }
  }
}
Here's an example response. If the order is placed successfully, the provisioningState should be Creating .
{
  "id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222",
  "type": "Microsoft.Capacity/reservationOrders",
  "name": "22222222-2222-2222-2222-222222222222",
  "etag": 1,
  "properties": {
    "displayName": "testreservation_S224om",
    "requestDateTime": "2020-07-14T05:42:34.3528353Z",
    "term": "P1Y",
    "provisioningState": "Creating",
    "reservations": [
      {
        "sku": {
          "name": "SAP_HANA_On_Azure_S224om"
        },
        "id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222/reservations/33333333-3333-3333-3333-3333333333333",
        "type": "Microsoft.Capacity/reservationOrders/reservations",
        "name": "22222222-2222-2222-2222-222222222222/33333333-3333-3333-3333-3333333333333",
        "etag": 1,
        "location": "eastus",
        "properties": {
          "appliedScopes": [
            "/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123"
          ],
          "appliedScopeType": "Single",
          "quantity": 1,
          "provisioningState": "Creating",
          "displayName": "testreservation_S224om",
          "effectiveDateTime": "2020-07-14T05:42:34.3528353Z",
          "lastUpdatedDateTime": "2020-07-14T05:42:34.3528353Z",
          "reservedResourceType": "SapHana",
          "instanceFlexibility": "NotSupported",
          "skuDescription": "SAP HANA on Azure Large Instances - S224om - US East",
          "renew": true
        }
      }
    ],
    "originalQuantity": 1,
    "billingPlan": "Upfront"
  }
}
Subscription The subscription used to pay for the reservation. The payment method on the subscription is
charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-
AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement. The charges are deducted from the monetary
commitment balance, if available, or charged as overage.
Scope The reservation's scope should be single scope.
Term One year or three years. It looks like P1Y or P3Y .
Quantity The number of instances being purchased for the reservation. The quantity to purchase is a single HLI at
a time. For additional reservations, repeat the API call with corresponding fields.
Troubleshoot errors
You might receive an error like the following example when you make a reservation purchase. The possible cause
is that the HLI isn't provisioned for purchase. If so, contact your Microsoft account team to get an HLI provisioned
before you try to make a reservation purchase.
{
  "error": {
    "code": "BadRequest",
    "message": "Capacity check or quota check failed. Please select a different subscription or location. You can also go to https://aka.ms/corequotaincrease to learn about quota increase."
  }
}
Next steps
Learn about How to call Azure REST APIs with Postman and cURL.
See SKUs for SAP HANA on Azure (Large Instances) for the available SKU list and regions.
Installation of SAP HANA on Azure virtual machines
12/22/2020 • 7 minutes to read
Introduction
This guide points you to the documentation resources you need to check before installing SAP HANA in an
Azure VM, so that you can perform the right steps and end up with a supported configuration of SAP HANA in
Azure VMs.
NOTE
This guide describes deployments of SAP HANA into Azure VMs. For information on how to deploy SAP HANA into HANA
large instances, see How to install and configure SAP HANA (Large Instances) on Azure.
Prerequisites
This guide also assumes that you're familiar with:
SAP HANA and SAP NetWeaver and how to install them on-premises.
How to install and operate SAP HANA and SAP application instances on Azure.
The concepts and procedures documented in:
Planning for SAP deployment on Azure, which includes Azure Virtual Network planning and Azure
Storage usage. See SAP NetWeaver on Azure Virtual Machines - Planning and implementation guide
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual Machines deployment for
SAP
High availability concepts for SAP HANA as documented in SAP HANA high availability for Azure virtual
machines
NOTE
Not all the commands in the different saptune profiles, or as described in the notes, might run successfully on Azure.
Commands that would manipulate the power mode of VMs usually return an error, since the power mode of the
underlying Azure host hardware cannot be manipulated.
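For illustration, on SLES the SAP HANA tuning profile is typically applied with saptune; a minimal sketch, assuming saptune is installed, looks like the following (expect the power-mode related tunings to report errors on Azure, as noted above):

# Show available solutions and apply the HANA solution
saptune solution list
saptune solution apply HANA

# Verify which notes and tunings were applied
saptune solution verify HANA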
Next steps
Read the documentation:
SAP HANA infrastructure configurations and operations on Azure
SAP HANA Azure virtual machine storage configurations
Deploy SAP S/4HANA or BW/4HANA on Azure
12/22/2020 • 5 minutes to read
This article describes how to deploy S/4HANA on Azure by using the SAP Cloud Appliance Library (SAP CAL) 3.0.
To deploy other SAP HANA-based solutions, such as BW/4HANA, follow the same steps.
NOTE
For more information about the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the SAP
Cloud Appliance Library 3.0.
NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.
NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.
2. Create a new SAP CAL account. The Accounts page shows three choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
c. Windows Azure operated by 21Vianet is an option in China that uses the classic deployment model.
To deploy in the Resource Manager model, select Microsoft Azure .
3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize . The following
page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:
6. Click Accept . If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add .
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review .
9. To create the association between your user and the newly created SAP CAL account, click Create .
You successfully created an SAP CAL account that is able to:
Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.
Now you can start to deploy S/4HANA into your user subscription in Azure.
NOTE
Before you continue, determine whether you have Azure vCPU quotas for Azure H-Series VMs. At the moment, the SAP CAL
uses H-Series VMs to deploy some of the SAP HANA-based solutions. Your Azure subscription might not have any
H-Series vCPU quotas. If so, you might need to contact Azure support to get a quota of at least 16 H-Series
vCPUs.
NOTE
When you deploy a solution on Azure in the SAP CAL, you might find that you can choose only one Azure region. To deploy
into Azure regions other than the one suggested by the SAP CAL, you need to purchase a CAL subscription from SAP. You
also might need to open a message with SAP to have your CAL account enabled to deliver into Azure regions other than the
ones initially suggested.
Deploy a solution
Let's deploy a solution from the Solutions page of the SAP CAL. The SAP CAL has two sequences to deploy:
A basic sequence that uses one page to define the system to be deployed
An advanced sequence that gives you certain choices on VM sizes
We demonstrate the basic path to deployment here.
1. On the Account Details page, you need to:
a. Select an SAP CAL account. (Use an account that is associated to deploy with the Resource Manager
deployment model.)
b. Enter an instance Name .
c. Select an Azure Region . The SAP CAL suggests a region. If you need another Azure region and you don't
have an SAP CAL subscription, you need to order a CAL subscription with SAP.
d. Enter a master Password for the solution of eight or nine characters. The password is used for the
administrators of the different components.
3. In the Private Key dialog box, click Store to store the private key in the SAP CAL. To use password
protection for the private key, click Download .
4. Read the SAP CAL Warning message, and click OK .
Now the deployment takes place. After some time, depending on the size and complexity of the solution (the
SAP CAL provides an estimate), the status is shown as active and ready for use.
5. To find the virtual machines collected with the other associated resources in one resource group, go to the
Azure portal:
6. On the SAP CAL portal, the status appears as Active . To connect to the solution, click Connect . Different
options to connect to the different components are deployed within this solution.
7. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide .
The documentation names the users for each of the connectivity methods. The passwords for those users
are set to the master password you defined at the beginning of the deployment process. In the
documentation, other more functional users are listed with their passwords, which you can use to sign in to
the deployed system.
For example, if you use the SAP GUI that's preinstalled on the Windows Remote Desktop machine, the S/4
system might look like this:
Or if you use the DBACockpit, the instance might look like this:
Within a few hours, a healthy SAP S/4 appliance is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
SAP HANA infrastructure configurations and
operations on Azure
12/22/2020 • 20 minutes to read
This document provides guidance for configuring Azure infrastructure and operating SAP HANA systems that are
deployed on Azure native virtual machines (VMs). The document also includes configuration information for SAP
HANA scale-out for the M128s VM SKU. This document is not intended to replace the standard SAP
documentation, which includes the following content:
SAP administration guide
SAP installation guides
SAP notes
Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
Azure virtual machines
Azure networking and virtual networks
Azure Storage
To learn more about SAP NetWeaver and other SAP components on Azure, see the SAP on Azure section of the
Azure documentation.
NOTE
For non-production scenarios, use the VM types that are listed in the SAP note #1928533. For the usage of Azure VMs for
production scenarios, check for SAP HANA certified VMs in the SAP published Certified IaaS Platforms list.
IMPORTANT
In order to use M208xx_v2 VMs, be careful selecting your Linux image from the Azure VM image gallery. For
details, read the article Memory optimized virtual machine sizes.
IMPORTANT
For functional, but more importantly for performance reasons, it is not supported to configure Azure Network
Virtual Appliances (NVAs) in the communication path between the SAP application and the DBMS layer of an SAP NetWeaver, Hybris,
or S/4HANA based SAP system. The communication between the SAP application layer and the DBMS layer needs to be
direct. The restriction does not include Azure ASG and NSG rules, as long as those ASG and NSG rules allow direct
communication. Further scenarios where NVAs are not supported are communication paths between Azure VMs that
represent Linux Pacemaker cluster nodes and SBD devices as described in High availability for SAP NetWeaver on Azure
VMs on SUSE Linux Enterprise Server for SAP applications, or communication paths between Azure VMs and Windows
Server SOFS set up as described in Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in
Azure. NVAs in communication paths can easily double the network latency between two communication partners and can
restrict throughput in critical paths between the SAP application layer and the DBMS layer. In some scenarios observed with
customers, NVAs caused Pacemaker Linux clusters to fail in cases where the Linux Pacemaker
cluster nodes needed to communicate with their SBD device through an NVA.
IMPORTANT
Another design that is NOT supported is the segregation of the SAP application layer and the DBMS layer into different
Azure virtual networks that are not peered with each other. It is recommended to segregate the SAP application layer and
DBMS layer using subnets within one Azure virtual network instead of using different Azure virtual networks. If you decide
not to follow the recommendation, and instead segregate the two layers into different virtual networks, the two virtual
networks need to be peered. Be aware that network traffic between two peered Azure virtual networks is subject to
transfer costs. With the huge data volume, often many terabytes, exchanged between the SAP application layer and DBMS layer,
substantial costs can accumulate if the SAP application layer and DBMS layer are segregated between two peered Azure
virtual networks.
When you install the VMs to run SAP HANA, the VMs need:
Two virtual NICs installed: one NIC to connect to the management subnet, and one NIC to connect from the
on-premises network or other networks, to the SAP HANA instance in the Azure VM.
Static private IP addresses that are deployed for both virtual NICs.
NOTE
You should assign static IP addresses through Azure means to individual vNICs. You should not assign static IP addresses
within the guest OS to a vNIC. Some Azure services like Azure Backup Service rely on the fact that at least the primary vNIC
is set to DHCP and not to static IP addresses. See also the document Troubleshoot Azure virtual machine backup. If you
need to assign multiple static IP addresses to a VM, you need to assign multiple vNICs to a VM.
However, for enduring deployments, you need to create a virtual datacenter network architecture in
Azure. This architecture recommends separating the Azure VNet Gateway that connects to on-premises into
a separate Azure VNet. This separate VNet should host all the traffic that leaves either to on-premises or to the
internet. This approach allows you to deploy software for auditing and logging traffic that enters the virtual
datacenter in Azure in this separate hub VNet. So you have one VNet that hosts all the software and
configurations that relate to in- and outgoing traffic to your Azure deployment.
The articles Azure Virtual Datacenter: A Network Perspective and Azure Virtual Datacenter and the Enterprise
Control Plane give more information on the virtual datacenter approach and related Azure VNet design.
NOTE
Traffic that flows between a hub VNet and spoke VNet using Azure VNet peering is subject to additional costs. Based on
those costs, you might need to consider making compromises between running a strict hub and spoke network design and
running multiple Azure ExpressRoute gateways that you connect to 'spokes' in order to bypass VNet peering. However,
Azure ExpressRoute gateways introduce additional costs as well. You may also encounter additional costs for third-party
software you use for network traffic logging, auditing, and monitoring. Depending on the costs for data exchange through
VNet peering on the one side, and the costs created by additional Azure ExpressRoute gateways and additional software
licenses on the other, you may decide for micro-segmentation within one VNet by using subnets as the isolation unit instead of VNets.
For an overview of the different methods for assigning IP addresses, see IP address types and allocation methods
in Azure.
For VMs running SAP HANA, you should work with static IP addresses assigned. The reason is that some
configuration attributes for HANA reference IP addresses.
Azure Network Security Groups (NSGs) are used to direct traffic that's routed to the SAP HANA instance or the
jumpbox. The NSGs and eventually Application Security Groups are associated to the SAP HANA subnet and the
Management subnet.
The following image shows an overview of a rough deployment schema for SAP HANA following a hub and
spoke VNet architecture:
To deploy SAP HANA in Azure without a site-to-site connection, you still want to shield the SAP HANA instance
from the public internet and hide it behind a forward proxy. In this basic scenario, the deployment relies on Azure
built-in DNS services to resolve hostnames. In a more complex deployment where public-facing IP addresses are
used, Azure built-in DNS services are especially important. Use Azure NSGs and Azure NVAs to control and monitor
the routing from the internet into your Azure VNet architecture. The following image shows a rough
schema for deploying SAP HANA without a site-to-site connection in a hub and spoke VNet architecture:
Another description on how to use Azure NVAs to control and monitor access from the internet without the hub and
spoke VNet architecture can be found in the article Deploy highly available network virtual appliances.
NOTE
Azure VM scale-out deployments of SAP HANA with standby node are only possible using the Azure NetApp Files storage.
No other SAP HANA certified Azure storage allows the configuration of SAP HANA standby nodes.
NOTE
SAP recommends separating network traffic to the client/application side from intra-node traffic as described in this
document. Therefore, putting an architecture in place as shown in the last graphic is recommended. Also consult your
security and compliance team for requirements that deviate from the recommendation.
From a networking point of view the minimum required network architecture would look like:
Installing SAP HANA scale-out in Azure
To install a scale-out SAP HANA configuration, you need to perform these rough steps:
Deploy new or adapt an existing Azure VNet infrastructure
Deploy the new VMs using Azure managed premium storage, Ultra disk volumes, and/or NFS volumes
based on ANF
Adapt network routing to make sure that, for example, intra-node communication between VMs is not
routed through an NVA
Install the SAP HANA master node
Adapt configuration parameters of the SAP HANA master node
Continue with the installation of the SAP HANA worker nodes
Installation of SAP HANA in scale-out configuration
As your Azure VM infrastructure is deployed and all other preparations are done, you need to install the SAP
HANA scale-out configuration in these steps:
Install the SAP HANA master node according to SAP's documentation
In case of using Azure premium storage or Ultra disk storage with non-shared disks for /hana/data and
/hana/log, you need to change the global.ini file and add the parameter 'basepath_shared = no' to the
global.ini file. This parameter enables SAP HANA to run in scale-out without 'shared' /hana/data and
/hana/log volumes between the nodes. Details are documented in SAP Note #2080991; a sketch of the
change follows after this list. If you are using NFS volumes based on ANF for /hana/data and /hana/log,
you don't need to make this change
After the eventual change of the global.ini parameter, restart the SAP HANA instance
Add additional worker nodes. See also
https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-
US/0d9fe701e2214e98ad4f8721f6558c34.html. Specify the internal network for SAP HANA inter-node
communication during the installation or afterwards using, for example, the local hdblcm. For more detailed
documentation, see also SAP Note #2183363.
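A minimal sketch of that global.ini change and the subsequent restart, assuming the parameter belongs in the [persistence] section (per SAP Note #2080991), that the file lives under the usual custom config path, and that the SID and instance number 00 are placeholders:

# /hana/shared/<SID>/global/hdb/custom/config/global.ini
[persistence]
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_shared = no

# Restart the instance as <sid>adm so the parameter takes effect
sapcontrol -nr 00 -function StopSystem HDB
sapcontrol -nr 00 -function StartSystem HDB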
Setting up an SAP HANA scale-out system with standby node on SUSE Linux is described in detail in Deploy
a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux
Enterprise Server. Equivalent documentation for Red Hat can be found in the article Deploy a SAP HANA scale-out
system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux.
| SAP HANA VM type | DT 2.0 VM type |
| --- | --- |
| M128ms | M64-32ms |
| M128s | M64-32ms |
| M64ms | E32sv3 |
| M64s | E32sv3 |
All combinations of SAP HANA-certified M-series VMs with supported DT 2.0 VMs (M64-32ms and E32sv3) are
possible.
Azure networking and SAP HANA DT 2.0
Installing DT 2.0 on a dedicated VM requires network throughput between the DT 2.0 VM and the SAP HANA VM
of 10 Gb minimum. Therefore it's mandatory to place all VMs within the same Azure VNet and enable Azure
accelerated networking.
See additional information about Azure accelerated networking here
VM Storage for SAP HANA DT 2.0
According to DT 2.0 best practice guidance, the disk IO throughput should be a minimum of 50 MB/sec per physical
core. Looking at the specs for the two Azure VM types that are supported for DT 2.0, the maximum disk IO
throughput limits for the VMs look like:
E32sv3 : 768 MB/sec (uncached) which means a ratio of 48 MB/sec per physical core
M64-32ms : 1000 MB/sec (uncached) which means a ratio of 62.5 MB/sec per physical core
It is required to attach multiple Azure disks to the DT 2.0 VM and create a software RAID (striping) at the OS level to
achieve the maximum disk throughput per VM. A single Azure disk cannot provide the throughput to reach the
max VM limit in this regard. Azure premium storage is mandatory to run DT 2.0.
Details about available Azure disk types can be found here
Details about creating software raid via mdadm can be found here
Details about configuring LVM to create a striped volume for max throughput can be found here
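As a minimal sketch of such a software RAID, assuming four hypothetical data disks attached as /dev/sdc through /dev/sdf and an arbitrary mount point, the mdadm variant could look like:

# Create a RAID-0 (striped) device across the four data disks
mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=256 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a file system and mount it as the DT 2.0 data volume
mkfs.xfs /dev/md0
mkdir -p /sap/dt_data
mount /dev/md0 /sap/dt_data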
Depending on size requirements, there are different options to reach the maximum throughput of a VM. Here are
possible data volume disk configurations for every DT 2.0 VM type to achieve the upper VM throughput limit. The
E32sv3 VM should be considered as an entry level for smaller workloads. If it turns out not to be fast enough, it
might be necessary to resize the VM to M64-32ms. As the M64-32ms VM has a lot of memory, the IO load might
not reach the limit, especially for read-intensive workloads. Therefore fewer disks in the stripe set might be
sufficient depending on the customer-specific workload. But to be on the safe side, the disk configurations below
were chosen to guarantee the maximum throughput:
M64-32ms: 4 x P50 -> 16 TB, 4 x P40 -> 8 TB, 5 x P30 -> 5 TB, 7 x P20 -> 3.5 TB, 8 x P15 -> 2 TB
E32sv3: 3 x P50 -> 12 TB, 3 x P40 -> 6 TB, 4 x P30 -> 4 TB, 5 x P20 -> 2.5 TB, 6 x P15 -> 1.5 TB
Especially if the workload is read-intensive, turning on the Azure host cache "read-only", as recommended for the
data volumes of database software, can boost IO performance. For the transaction log, in contrast, the Azure host
disk cache must be "none".
Regarding the size of the log volume, a recommended starting point is a heuristic of 15% of the data size. The
creation of the log volume can be accomplished by using different Azure disk types, depending on cost and
throughput requirements. For the log volume, high I/O throughput is required. If you use the VM type M64-32ms,
it is mandatory to enable Write Accelerator. Azure Write Accelerator provides optimal disk write latency for
the transaction log (only available for M-series). There are some items to consider though, like the maximum
number of disks per VM type. Details about Write Accelerator can be found here
Here are a few examples about sizing the log volume:
(Table: data volume size and disk type, with corresponding log volume and disk type configurations)
Like for SAP HANA scale-out, the /hana/shared directory has to be shared between the SAP HANA VM and the DT
2.0 VM. The same architecture as for SAP HANA scale-out, using dedicated VMs that act as a highly available
NFS server, is recommended. In order to provide a shared backup volume, the identical design can be used. But it
is up to the customer whether HA is necessary or whether it is sufficient to use a dedicated VM with enough storage
capacity to act as a backup server.
Links to DT 2.0 documentation
SAP HANA Dynamic Tiering installation and update guide
SAP HANA Dynamic Tiering tutorials and resources
SAP HANA Dynamic Tiering PoC
SAP HANA 2.0 SPS 02 dynamic tiering enhancements
Be sure to install SAProuter in a separate VM and not in your Jumpbox VM. The separate VM must have a static IP
address. To connect your SAProuter to the SAProuter that is hosted by SAP, contact SAP for an IP address. (The
SAProuter that is hosted by SAP is the counterpart of the SAProuter instance that you install on your VM.) Use the
IP address from SAP to configure your SAProuter instance. In the configuration settings, the only necessary port is
TCP port 3299.
For more information on how to set up and maintain remote support connections through SAProuter, see the SAP
documentation.
High-availability with SAP HANA on Azure native VMs
If you're running SUSE Linux Enterprise Server or Red Hat, you can establish a Pacemaker cluster with STONITH
devices. You can use the devices to set up an SAP HANA configuration that uses synchronous replication with
HANA System Replication and automatic failover. For more information, see the articles listed in the 'Next steps' section.
Next Steps
Get familiar with the articles as listed
SAP HANA Azure virtual machine storage configurations
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE
Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red
Hat Enterprise Linux
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SAP HANA Azure virtual machine storage
configurations
12/22/2020 • 23 minutes to read
Azure provides different types of storage that are suitable for Azure VMs that are running SAP HANA. The
SAP HANA certified Azure storage types that can be considered for SAP HANA deployments are:
Azure premium SSD or premium storage
Ultra disk
Azure NetApp Files
To learn about these disk types, see the article Azure Storage types for SAP workload and Select a disk type
Azure offers two deployment methods for VHDs on Azure Standard and premium storage. We expect you to
take advantage of Azure managed disk for Azure block storage deployments.
For a list of storage types and their SLAs in IOPS and storage throughput, review the Azure documentation
for managed disks.
IMPORTANT
Independent of the Azure storage type chosen, the file system that is used on that storage needs to be supported by
SAP for the specific operating system and DBMS. SAP support note #2972496 lists the supported file systems for
different operating systems and databases, including SAP HANA. This applies to all volumes SAP HANA might access
for reading and writing, for whatever task. Specifically when using NFS on Azure for SAP HANA, additional restrictions
on NFS versions apply, as stated later in this article
The minimum SAP HANA certified conditions for the different storage types are:
Azure premium storage - /hana/log is required to be supported by Azure Write Accelerator. The
/hana/data volume could be placed on premium storage without Azure Write Accelerator or on Ultra
disk
Azure Ultra disk at least for the /hana/log volume. The /hana/data volume can be placed on either
premium storage without Azure Write Accelerator or in order to get faster restart times Ultra disk
NFS v4.1 volumes on top of Azure NetApp Files for /hana/log and /hana/data . The volume of
/hana/shared can use NFS v3 or NFS v4.1 protocol
Some of the storage types can be combined. For example, it is possible to put /hana/data onto premium
storage and /hana/log can be placed on Ultra disk storage in order to get the required low latency. If you
use a volume based on ANF for /hana/data , /hana/log volume needs to be based on NFS on top of ANF as
well. Using NFS on top of ANF for one of the volumes (like /hana/data) and Azure premium storage or Ultra
disk for the other volume (like /hana/log ) is not supported.
In the on-premises world, you rarely had to care about the I/O subsystems and their capabilities. The reason was
that the appliance vendor needed to make sure that the minimum storage requirements were met for SAP
HANA. As you build the Azure infrastructure yourself, you should be aware of some of these SAP issued
requirements. Some of the minimum throughput characteristics that SAP recommends are:
Read/write on /hana/log of 250 MB/sec with 1 MB I/O sizes
Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes
Write activity of at least 250 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes
Low storage latency is critical for DBMS systems, even though DBMS like SAP HANA keep data in-memory.
The critical path in storage is usually around the transaction log writes of the DBMS systems. But
operations like writing savepoints or loading data in-memory after crash recovery can also be critical.
Therefore, it is mandatory to leverage Azure premium storage, Ultra disk, or ANF for /hana/data and
/hana/log volumes.
Some guiding principles in selecting your storage configuration for HANA:
Decide on the type of storage based on Azure Storage types for SAP workload and Select a disk type
Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on a VM. Overall VM
storage throughput is documented in the article Memory optimized virtual machine sizes
When deciding on the storage configuration, try to stay below the overall throughput of the VM with your
/hana/data volume configuration. When writing savepoints, SAP HANA can issue I/Os aggressively. It is
easily possible to push up to the throughput limits of your /hana/data volume when writing a savepoint. If
the disk(s) that build the /hana/data volume have a higher throughput than your VM allows, you could
run into situations where throughput utilized by the savepoint writing interferes with the throughput
demands of the redo log writes, a situation that can impact the application throughput
If you are using Azure premium storage, the least expensive configuration is to use logical volume
managers to build stripe sets for the /hana/data and /hana/log volumes
IMPORTANT
The suggestions for the storage configurations are meant as directions to start with. Running workload and analyzing
storage utilization patterns, you might realize that you are not utilizing all the storage bandwidth or IOPS provided.
You might then consider downsizing the storage. Or on the contrary, your workload might need more storage throughput
than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput.
Balancing the storage capacity required, the storage latency needed, the storage throughput and IOPS
required, and the least expensive configuration, Azure offers enough different storage types with different capabilities and
different price points to find and adjust to the right compromise for you and your HANA workload.
IMPORTANT
When using Azure premium storage, the usage of Azure Write Accelerator for the /hana/log volume is mandatory.
Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator does not
work in combination with other Azure VM families, like Esv3 or Edsv4.
The caching recommendations for Azure premium disks below assume the following I/O characteristics for SAP
HANA:
There is hardly any read workload against the HANA data files. Exceptions are large-sized I/Os after restart
of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data
files can be HANA database backups. As a result, read caching mostly does not make sense, since in most
cases all data file volumes need to be read completely.
Writing against the data files is experienced in bursts driven by HANA savepoints and HANA crash
recovery. Writing savepoints is asynchronous and does not hold up any user transactions. Writing data
during crash recovery is performance critical in order to get the system responding fast again. However,
crash recovery should be a rather exceptional situation.
There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing
transaction log backups, crash recovery, or in the restart phase of a HANA instance.
The main load against the SAP HANA redo log file is writes. Depending on the nature of the workload, you can
have I/Os as small as 4 KB or, in other cases, I/O sizes of 1 MB or more. Write latency against the SAP
HANA redo log is performance critical.
All writes need to be persisted on disk in a reliable fashion.
Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the
different volumes using Azure premium storage should be set like:
/hana/data - no caching or read caching
/hana/log - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be
enabled
/hana/shared - read caching
OS disk - don't change default caching that is set by Azure at creation time of the VM
If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define
stripe sizes. These sizes differ between /hana/data and /hana/log . Recommendation: As stripe sizes
the recommendation is to use:
256 KB for /hana/data
64 KB for /hana/log
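A minimal LVM sketch with those stripe sizes, assuming three hypothetical premium disks /dev/sdc through /dev/sde for the data volume (the log volume would follow the same pattern with a 64 KB stripe size):

# Initialize the disks and create a dedicated volume group for /hana/data
pvcreate /dev/sdc /dev/sdd /dev/sde
vgcreate vg_hana_data /dev/sdc /dev/sdd /dev/sde

# Striped logical volume: 3 stripes, 256 KB stripe size as recommended above
lvcreate --extents 100%FREE --stripes 3 --stripesize 256 \
    --name lv_hana_data vg_hana_data

mkfs.xfs /dev/vg_hana_data/lv_hana_data
mkdir -p /hana/data
mount /dev/vg_hana_data/lv_hana_data /hana/data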
NOTE
The stripe size for /hana/data got changed from earlier recommendations calling for 64 KB or 128 KB to 256 KB
based on customer experiences with more recent Linux versions. The size of 256 KB is providing slightly better
performance. We also changed the recommendation for stripe sizes of /hana/log from 32 KB to 64 KB in order to get
enough throughput with larger I/O sizes.
NOTE
You don't need to configure any redundancy level using RAID volumes since Azure block storage keeps three images
of a VHD. The usage of a stripe set with Azure premium disks is purely to configure volumes that provide sufficient
IOPS and/or I/O throughput.
IOPS and storage throughput accumulate across the Azure VHDs underneath a stripe set. So, if you put a
stripe set across 3 x P30 Azure premium storage disks, it should give you three times the IOPS and three times
the storage throughput of a single Azure premium storage P30 disk.
IMPORTANT
In case you are using LVM or mdadm as the volume manager to create stripe sets across multiple Azure premium disks,
the three SAP HANA file systems /data, /log, and /shared must not be put in a default or root volume group. It is
highly recommended to follow the Linux vendor's guidance, which is typically to create individual volume groups for
/data, /log, and /shared.
IMPORTANT
SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the
/hana/log volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are
expected to be configured with Azure Write Accelerator for the /hana/log volume.
NOTE
In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. As you
are using storage test tools of whatever shape or form, keep in mind the way Azure premium disk bursting works.
Running the storage tests delivered through the SAP HWCCT or HCMT tool, we do not expect that all tests will
pass the criteria, since some of the tests will exceed the bursting credits you can accumulate, especially when all the
tests run sequentially without a break.
NOTE
For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the SAP
documentation for IAAS.
| VM SKU | RAM | Max. VM I/O throughput | /hana/data | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
| M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
| M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000 |
| M64s | 1,000 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
| M64ms | 1,750 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M128s | 2,000 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |

For the /hana/log volume, the configuration could look like:

| VM SKU | RAM | Max. VM I/O throughput | /hana/log volume | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M64s | 1,000 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M64ms | 1,750 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M128s | 2,000 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M128ms | 3,800 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
(Table: VM SKU, RAM, max. VM I/O throughput, and recommended configurations for /hana/shared, the root volume, and /usr/sap)
Check whether the storage throughput for the different suggested volumes meets the workload that you
want to run. If the workload requires higher throughput for /hana/data and /hana/log , you need to increase
the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS
and I/O throughput within the limits of the Azure virtual machine type.
Azure Write Accelerator only works in conjunction with Azure managed disks. So at least the Azure premium
storage disks forming the /hana/log volume need to be deployed as managed disks. More detailed
instructions and restrictions of Azure Write Accelerator can be found in the article Write Accelerator.
For the HANA certified VMs of the Azure Esv3 family and the Edsv4 family, you need to use ANF for the /hana/data
and /hana/log volumes. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage,
though only for the /hana/log volume. As a result, the configurations for the /hana/data volume on Azure
premium storage could look like:
| VM SKU | RAM | Max. VM I/O throughput | /hana/data | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64s_v3 | 432 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
For the other volumes, including /hana/log on Ultra disk, the configuration could look like:
(Table: VM SKU, RAM, max. VM I/O throughput, /hana/log volume, /hana/log I/O throughput, /hana/log IOPS, /hana/shared, root volume, and /usr/sap)
NOTE
Ultra disk is not yet present in all the Azure regions and is also not yet supporting all VM types listed below. For
detailed information where Ultra disk is available and which VM families are supported, check the article What disk
types are available in Azure?.
NOTE
Azure Ultra disk is enforcing a minimum of 2 IOPS per Gigabyte capacity of a disk
| VM SKU | RAM | Max. VM I/O throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E32ds_v4 | 256 GiB | 768 MBps | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
| E64s_v3 | 432 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800 |
| M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
| M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
The values listed are intended to be a starting point and need to be evaluated against the real
demands. The advantage of Azure Ultra disk is that the values for IOPS and throughput can be adapted
without the need to shut down the VM or halt the workload applied to the system.
NOTE
So far, storage snapshots with Ultra disk storage are not available. This blocks the usage of VM snapshots with Azure
Backup Services.
(Table: VM SKU, RAM, max. VM I/O throughput, /hana/data and /hana/log striped with LVM or mdadm, /hana/shared, root volume, /usr/sap, and comments)
1 Azure Write Accelerator can't be used with the Esv3 and Edsv4 VM families. As a result of using Azure premium
storage, the I/O latency will not be less than 1 ms.
2 The VM family supports Azure Write Accelerator, but there is a potential that the IOPS limit of Write
Accelerator could limit the disk configuration's IOPS capabilities.
In the case of combining the data and log volume for SAP HANA, the disks building the striped volume
should not have read cache or read/write cache enabled.
There are VM types listed that are not certified with SAP and as such not listed in the so-called SAP HANA
hardware directory. Customer feedback was that those non-listed VM types were used successfully for
some non-production tasks.
Next steps
For more information, see:
SAP HANA High Availability guide for Azure virtual machines.
NFS v4.1 volumes on Azure NetApp Files for SAP
HANA
12/22/2020 • 11 minutes to read
Azure NetApp Files provides native NFS shares that can be used for /hana/shared , /hana/data , and /hana/log
volumes. Using ANF-based NFS shares for the /hana/data and /hana/log volumes requires the usage of the v4.1
NFS protocol. The NFS protocol v3 is not supported for the usage of /hana/data and /hana/log volumes when
basing the shares on ANF.
IMPORTANT
The NFS v3 protocol implemented on Azure NetApp Files is not supported to be used for /hana/data and /hana/log . The
usage of the NFS 4.1 is mandatory for /hana/data and /hana/log volumes from a functional point of view. Whereas for the
/hana/shared volume the NFS v3 or the NFS v4.1 protocol can be used from a functional point of view.
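For illustration, a minimal sketch of mounting an ANF volume for /hana/data with the NFS v4.1 protocol (the IP address, volume path, SID, and mount options shown are placeholders; follow the deployment guides referenced at the end of this article for the exact options for your scenario):

# Mount an ANF volume for /hana/data using NFS v4.1
mkdir -p /hana/data/SID/mnt00001
mount -t nfs -o rw,hard,nfsvers=4.1,rsize=262144,wsize=262144,noatime \
    10.23.1.4:/HN1-data-mnt00001 /hana/data/SID/mnt00001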
Important considerations
When considering Azure NetApp Files for SAP NetWeaver and SAP HANA, be aware of the following important
considerations:
The minimum capacity pool is 4 TiB
The minimum volume size is 100 GiB
Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes are mounted, must be in the
same Azure Virtual Network or in peered virtual networks in the same region
It is important to have the virtual machines deployed in close proximity to the Azure NetApp storage for low
latency.
The selected virtual network must have a subnet, delegated to Azure NetApp Files
Make sure the latency from the database server to the ANF volume is measured and below 1 millisecond
The throughput of an Azure NetApp volume is a function of the volume quota and Service level, as documented
in Service level for Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting
throughput meets the HANA system requirements
Try to “consolidate” volumes to achieve more performance in a larger volume: for example, use one volume for
/sapmnt, /usr/sap/trans, … if possible
Azure NetApp Files offers export policy: you can control the allowed clients, the access type (Read&Write, Read
Only, etc.).
The Azure NetApp Files service isn't zone aware yet. Currently the service isn't deployed in all
Availability Zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
The User ID for sidadm and the Group ID for sapsys on the virtual machines must match the configuration in
Azure NetApp Files.
IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines
and the Azure NetApp Files volumes are deployed in close proximity.
IMPORTANT
If there is a mismatch between the User ID for sidadm and the Group ID for sapsys between the virtual machine and the
Azure NetApp configuration, the permissions for files on Azure NetApp volumes, mounted to the VM, would be displayed
as nobody . Make sure to specify the correct User ID for sidadm and the Group ID for sapsys when on-boarding a new
system to Azure NetApp Files.
It is important to understand that the data is written to the same SSDs in the storage backend. The performance
quota from the capacity pool was created to be able to manage the environment. The storage KPIs are equal for all
HANA database sizes. In almost all cases, this assumption does not reflect the reality or the customer expectation.
The size of HANA systems does not necessarily mean that a small system requires low storage throughput and a
large system requires high storage throughput. But generally we can expect higher throughput requirements for
larger HANA database instances. As a result of SAP's sizing rules for the underlying hardware, such larger HANA
instances also provide more CPU resources and higher parallelism in tasks like loading data after an instance
restart. As a result, the volume sizes should be adapted to the customer expectations and requirements, and not
only driven by pure capacity requirements.
As you design the infrastructure for SAP in Azure you should be aware of some minimum storage throughput
requirements (for productions Systems) by SAP, which translate into minimum throughput characteristics of:
(Table: volume type and I/O type, minimum KPI demanded by SAP, Premium service level, and Ultra service level)
Since all three KPIs are demanded, the /hana/data volume needs to be sized toward the larger capacity to fulfill
the minimum read requirements.
For HANA systems that don't require high bandwidth, the ANF volume sizes can be smaller. And in case a
HANA system requires more throughput, the volume can be adapted by resizing the capacity online. No KPIs are
defined for backup volumes. However, the backup volume throughput is essential for a well performing
environment. Log and data volume performance must be designed to the customer expectations.
IMPORTANT
Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-
1.4 GB/sec bandwidth leveraged by a consumer in a virtual machine. This has to do with the underlying architecture of the
ANF offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the
article Performance benchmark test results for Azure NetApp Files were conducted against one shared NFS volume with
multiple client VMs, and as a result with multiple sessions. That scenario is different from the scenario we measure in SAP,
where we measure throughput from a single VM against an NFS volume hosted on ANF.
To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for
/hana/shared , the recommended sizes would look like:
(Table: volume, size on Premium storage tier, size on Ultra storage tier, and supported NFS protocol)
NOTE
The Azure NetApp Files, sizing recommendations stated in this document are targeting the minimum requirements SAP
expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be
enough. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.
Therefore, you could consider deploying similar throughput for the ANF volumes as listed for Ultra disk storage.
Also consider the sizes listed for the volumes of the different VM SKUs in the Ultra disk tables above.
TIP
You can resize Azure NetApp Files volumes dynamically, without the need to unmount the volumes, stop the virtual
machines, or stop SAP HANA. That allows flexibility to meet both the expected and unforeseen throughput
demands of your application.
Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1
volumes that are hosted in ANF is published in SAP HANA scale-out with standby node on Azure VMs with Azure
NetApp Files on SUSE Linux Enterprise Server.
Availability
ANF system updates and upgrades are applied without impacting the customer environment. The defined SLA is
99.99%.
Backup
Besides streaming backups and the Azure Backup service backing up SAP HANA databases as described in the article
Backup guide for SAP HANA on Azure Virtual Machines, Azure NetApp Files opens the possibility to perform
storage-based snapshot backups.
SAP HANA supports:
Storage-based snapshot backups from SAP HANA 1.0 SPS7 on
Storage-based snapshot backup support for Multi Database Container (MDC) HANA environments from SAP
HANA 2.0 SPS4 on
Creating storage-based snapshot backups is a simple four-step procedure,
1. Creating a HANA (internal) database snapshot - an activity you or tools need to perform
2. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a
result of creating a HANA snapshot
3. Create a snapshot on the /hana/data volume on the storage - a step you or tools need to perform. There is no
need to perform a snapshot on the /hana/log volume
4. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to
perform
WARNING
Missing the last step or failing to perform the last step has severe impact on SAP HANA's memory demand and can lead to a
halt of SAP HANA
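As a minimal sketch of steps 1 and 4, assuming hdbsql access and an example comment string and backup ID (the timestamp and ID shown are placeholders), the HANA-internal snapshot could be created and later confirmed like this:

-- Step 1: create the HANA (internal) database snapshot
BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'SNAPSHOT-2020-08-18:11:00';

-- Step 4: after the storage snapshot succeeded, confirm and release it: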
BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID 47110815 SUCCESSFUL 'SNAPSHOT-2020-08-18:11:00';
This snapshot backup procedure can be managed in a variety of ways, using various tools. One example is the
python script “ntaphana_azure.py” available on GitHub at https://github.com/netapp/ntaphana. This is sample code,
provided “as-is” without any maintenance or support.
Caution
A snapshot in itself is not a protected backup, since it is located on the same physical storage as the volume you
just took a snapshot of. It is mandatory to “protect” at least one snapshot per day to a different location. This can be
done in the same environment, in a remote Azure region, or on Azure Blob storage.
For users of Commvault backup products, a second option is Commvault IntelliSnap V.11.21 and later, which offers
Azure NetApp Files support. The article Commvault IntelliSnap 11.21 provides more information.
Back up the snapshot using Azure blob storage
Backing up to Azure Blob storage is a cost-effective and fast method to save ANF-based HANA database storage
snapshot backups. To save the snapshots to Azure Blob storage, the azcopy tool is preferred. Download the latest
version of this tool and install it, for example, in the bin directory where the python script from GitHub is installed.
Download the latest azcopy tool:
root # wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux && tar -xf azcopy_v10.tar.gz --strip-components=1
Saving to: ‘azcopy_v10.tar.gz’
The most advanced feature is the sync option. If you use the sync option, azcopy keeps the source and the
destination directory synchronized. The usage of the parameter --delete-destination is important: without this
parameter, azcopy does not delete files at the destination site, and the space utilization on the destination side would
grow. Create a block blob container in your Azure storage account. Then create the SAS key for the blob container
and synchronize the snapshot folder to the Azure blob container.
For example, if a daily snapshot should be synchronized to the Azure blob container to protect the data, and only
that one snapshot should be kept, the command below can be used.
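A hedged sketch of such a command; the local snapshot path, storage account name, container name, and SAS token are placeholders:
root # azcopy sync /hana/data/HN1/mnt00001/.snapshot "https://<account>.blob.core.windows.net/hana-snapshots?<SAS-token>" --recursive=true --delete-destination=true
The --delete-destination=true flag removes blobs that no longer exist in the source folder, so only the current snapshot is retained in the container.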
Next steps
Read the article:
SAP HANA high availability for Azure virtual machines
SAP HANA high availability for Azure virtual
machines
12/22/2020 • 2 minutes to read
You can use numerous Azure capabilities to deploy mission-critical databases like SAP HANA on Azure VMs. This
article provides guidance on how to achieve availability for SAP HANA instances that are hosted in Azure VMs. The
article describes several scenarios that you can implement by using the Azure infrastructure to increase availability
of SAP HANA in Azure.
Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in Azure, including:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the option to use JavaScript
Object Notation (JSON) templates.
This article also assumes that you are familiar with installing SAP HANA instances, and with administrating and
operating SAP HANA instances. It's especially important to be familiar with the setup and operations of HANA
system replication. This includes tasks like backup and restore for SAP HANA databases.
These articles provide a good overview of using SAP HANA in Azure:
Manual installation of single-instance SAP HANA on Azure VMs
Set up SAP HANA system replication in Azure VMs
Back up SAP HANA on Azure VMs
It's also a good idea to be familiar with these articles about SAP HANA:
High availability for SAP HANA
FAQ: High availability for SAP HANA
Perform system replication for SAP HANA
SAP HANA 2.0 SPS 01 What’s new: High availability
Network recommendations for SAP HANA system replication
SAP HANA system replication
SAP HANA service auto-restart
Configure SAP HANA system replication
Beyond being familiar with deploying VMs in Azure, before you define your availability architecture in Azure, we
recommend that you read Manage the availability of Windows virtual machines in Azure.
Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure region
12/22/2020 • 10 minutes to read
This article describes several availability scenarios within one Azure region. Azure has many regions, spread
throughout the world. For the list of Azure regions, see Azure regions. For deploying SAP HANA on VMs within one
Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you can
deploy two VMs with two HANA instances within an Azure availability set that uses HANA system replication for
availability.
Currently, Azure offers Azure Availability Zones. This article does not describe Availability Zones in detail, but it
includes a general discussion about using Availability Sets versus Availability Zones.
Azure regions where Availability Zones are offered have multiple datacenters. The datacenters are independent in
the supply of power source, cooling, and network. The reason for offering different zones within a single Azure
region is to deploy applications across two or three of the Availability Zones that are offered. If you deploy across zones,
issues in power and networking that affect only one Availability Zone's infrastructure leave your application
deployment within the Azure region still functional. Some reduced capacity might occur. For example, VMs in one
zone might be lost, but VMs in the other two zones would still be up and running.
An Azure Availability Set is a logical grouping capability that helps ensure that the VM resources that you place
within the Availability Set are failure-isolated from each other when they are deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute racks,
storage units, and network switches. In some Azure documentation, this configuration is referred to as placements
in different update and fault domains. These placements usually are within an Azure datacenter. If power source and
network issues affected the entire datacenter into which you deployed, all your capacity in that Azure
region would be affected.
The placement of datacenters that represent Azure Availability Zones is a compromise between delivering
acceptable network latency between services deployed in different zones, and a distance between datacenters.
Natural catastrophes ideally wouldn't affect the power, network supply, and infrastructure for all Availability Zones
in this region. However, as monumental natural catastrophes have shown, Availability Zones might not always
provide the availability that you want within one region. Think about Hurricane Maria that hit the island of Puerto
Rico on September 20, 2017. The hurricane basically caused a nearly 100 percent blackout on the 90-mile-wide
island.
Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to
host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of other
Azure components are sufficient for you to fulfill your availability SLAs for your customers. In this scenario, you have
no need to leverage an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely on two
different features:
Azure VM auto-restart (also referred to as Azure service healing)
SAP HANA auto-restart
Azure VM auto restart, or service healing, is a functionality in Azure that works on two levels:
The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.
A health check functionality monitors the health of every VM that's hosted on an Azure server host. If a VM falls into
a non-healthy state, a reboot of the VM can be initiated by the Azure host agent that checks the health of the VM.
The fabric controller checks the health of the host by checking many different parameters that might indicate issues
with the host hardware. It also checks on the accessibility of the host via the network. An indication of problems
with the host can lead to the following events:
If the host signals a bad health state, a reboot of the host and a restart of the VMs that were running on the host
is triggered.
If the host is not in a healthy state after a successful reboot, a redeployment of the VMs that were originally on the
now unhealthy node onto a healthy host server is initiated. In this case, the original host is marked as not
healthy. It won't be used for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on a healthy
host is triggered.
With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically
restarted on a healthy Azure host.
IMPORTANT
Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of the
commonly used Linux releases do not automatically restart VMs or servers where the Linux kernel is in a panic state. Instead,
the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis. Azure honors
that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is that such occurrences
are extremely rare. You could override the default behavior to enable a restart of the VM. To change the default behavior,
set the parameter kernel.panic in /etc/sysctl.conf. The value you set for this parameter is in seconds. Common
recommended values are to wait 20-30 seconds before triggering the reboot through this parameter. See also
https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf.
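A minimal sketch of that setting, assuming a 20-second wait before the automatic reboot:
# /etc/sysctl.conf: reboot 20 seconds after a kernel panic
kernel.panic = 20
You can apply the change without a reboot by running sudo sysctl -p.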
The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM
starts automatically after the VM reboots. You can set up HANA service auto-restart through the watchdog services
of the various HANA services.
You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the
SAP HANA documentation, this setup is called host auto-failover. This configuration might make sense in an on-
premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the
host auto-failover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure
provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host auto-
failover. Because of Azure service healing, there is no reference architecture that foresees a standby node for HANA
host auto-failover.
Special case of SAP HANA scale-out configurations in Azure
High availability for SAP HANA scale-out configurations relies on the service healing of Azure VMs and the restart
of the SAP HANA instance once the VM is up and running again. High availability architectures based on HANA
System Replication are going to be introduced at a later time.
SAP HANA system replication without auto failover and with data preload
In this scenario, data that's replicated to the HANA instance in the second VM is preloaded. This eliminates the two
advantages of not preloading data. In this case, you can't run another SAP HANA system on the second VM. You
also can't use a smaller VM size. Hence, customers rarely implement this scenario.
SAP HANA system replication with automatic failover
In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES
Linux have a failover cluster defined. The SLES Linux cluster is based on the Pacemaker framework, in conjunction
with a STONITH device.
From an SAP HANA perspective, the replication mode that's used is synchronous and an automatic failover is configured.
In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous
stream of change records from the primary SAP HANA instance. As transactions are committed by the application
at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the
secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous
replication modes. For details and for a description of differences between these two synchronous replication
modes, see the SAP article Replication modes for SAP HANA system replication.
The overall configuration looks like:
You might choose this solution because it enables you to achieve an RPO=0 and a low RTO. Configure the SAP
HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system
replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to the
secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the same.
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
For more information about SAP HANA availability across Azure regions, see:
SAP HANA availability across Azure regions
SAP HANA availability across Azure regions
12/22/2020 • 5 minutes to read
This article describes scenarios related to SAP HANA availability across different Azure regions. Because of the
distance between Azure regions, setting up SAP HANA availability in multiple Azure regions involves special
considerations.
NOTE
In this configuration, you can't provide an RPO=0 because your HANA system replication mode is asynchronous. If you need
to provide an RPO=0, this configuration isn't the configuration of choice.
A small change that you can make in the configuration might be to configure data as preloading. However, given
the manual nature of failover and the fact that application layers also need to move to the second region, it might
not make sense to preload data.
If the organization has requirements for high-availability readiness in the second (DR) Azure region, then the
architecture would look like the following:
Using logreplay as operation mode, this configuration provides an RPO=0, with low RTO, within the primary region.
The configuration also provides decent RPO if a move to the second region is involved. The RTO times in the second
region are dependent on whether data is preloaded. Many customers use the VM in the secondary region to run a
test system. In that use case, the data can't be preloaded.
IMPORTANT
The operation modes between the different tiers need to be homogeneous. You can't use logreplay as the operation mode
between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can choose only one operation mode, and it
must be consistent for all tiers. Since delta_datashipping is not suitable to give you an RPO=0, the only reasonable
operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some
restrictions, see the SAP article Operation modes for SAP HANA system replication.
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
High availability of SAP HANA on Azure VMs on
SUSE Linux Enterprise Server
12/22/2020 • 32 minutes to read
For on-premises development, you can use either HANA System Replication or shared storage to
establish high availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication on
Azure is currently the only supported high availability function. SAP HANA Replication consists of one
primary node and at least one secondary node. Changes to the data on the primary node are replicated to the
secondary node synchronously or asynchronously.
This article describes how to deploy and configure the virtual machines, install the cluster framework, and
install and configure SAP HANA System Replication. In the example configurations, installation commands,
instance number 03 , and HANA System ID HN1 are used.
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists the prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 2178632 has detailed information about all of the monitoring metrics that are reported for SAP
in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Note 401162 has information on how to avoid "address already in use" when setting up HANA
System Replication.
SAP Community WIKI has all of the required SAP Notes for Linux.
SAP HANA Certified IaaS Platforms
Azure Virtual Machines planning and implementation for SAP on Linux guide.
Azure Virtual Machines deployment for SAP on Linux (this article).
Azure Virtual Machines DBMS deployment for SAP on Linux guide.
SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides
Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP Applications 12
SP1). The guide contains all of the required information to set up SAP HANA System Replication for
on-premises development. Use this guide as a baseline.
Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP Applications 12 SP1)
Overview
To achieve high availability, SAP HANA is installed on two virtual machines. The data is replicated by using
HANA System Replication.
SAP HANA System Replication setup uses a dedicated virtual hostname and virtual IP addresses. On Azure, a
load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.0.0.13 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be
part of HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP
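A hedged sketch of how the probe and one of the load-balancing rules might be created with Azure CLI. The resource group, load balancer, front-end, and backend-pool names are placeholders; the remaining rules follow the same pattern:
az network lb probe create --resource-group <rg> --lb-name <lb> --name hanaProbe --protocol tcp --port 62503
az network lb rule create --resource-group <rg> --lb-name <lb> --name hana30315 \
   --protocol tcp --frontend-port 30315 --backend-port 30315 \
   --frontend-ip-name <frontend> --backend-pool-name <pool> \
   --probe-name hanaProbe --floating-ip true --idle-timeout 30
Floating IP (direct server return) is what lets the virtual IP move with the HANA primary, which is why it is enabled on the rule.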
IMPORTANT
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types you are using. The list of SAP
HANA certified VM types and OS releases for those can be looked up in SAP HANA Certified IaaS Platforms. Make sure
to click into the details of the VM type listed to get the complete list of SAP HANA supported OS releases for the
specific VM type.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure
Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
ls /dev/disk/azure/scsi1/lun*
Example output:
Create physical volumes for all of the disks that you want to use:
Create a volume group for the data files. Use one volume group for the log files and one for the shared
directory of SAP HANA:
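A sketch of these two steps, assuming four LUNs (lun0 to lun3, two for data, one for log, one for shared) and the volume-group names used later in this document:
sudo pvcreate /dev/disk/azure/scsi1/lun0
sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3
sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3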
Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to
the values documented in SAP HANA VM storage configurations. The -i argument should be the
number of the underlying physical volumes and the -I argument is the stripe size. In this document,
two physical volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe
size for the data volume is 256KiB . One physical volume is used for the log volume, so no -i or -I
switches are explicitly used for the log volume commands.
IMPORTANT
Use the -i switch and set it to the number of underlying physical volumes when you use more than one
physical volume for each data, log, or shared volume. Use the -I switch to specify the stripe size when
creating a striped volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.
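Continuing the sketch, the logical volumes and file systems might be created like this. The stripe settings follow the explanation above; the logical volume names match those referenced later in this document:
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared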
Create the mount directories and copy the UUID of all of the logical volumes:
sudo vi /etc/fstab
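A sketch of the entries, with placeholders for the UUIDs copied from the logical volumes:
/dev/disk/by-uuid/<UUID of hana_data> /hana/data xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of hana_log> /hana/log xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of hana_shared> /hana/shared xfs defaults,nofail 0 2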
sudo mount -a
sudo vi /etc/hosts
Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment:
10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1
To install SAP HANA System Replication, follow chapter 4 of the SAP HANA SR Performance Optimized
Scenario guide.
1. [A] Run the hdblcm program from the HANA DVD. Enter the following values at the prompt:
Choose installation: Enter 1 .
Select additional components for installation: Enter 1 .
Enter Installation Path [/hana/shared]: Select Enter.
Enter Local Host Name [..]: Select Enter.
Do you want to add additional hosts to the system? (y/n) [n]: Select Enter.
Enter SAP HANA System ID: Enter the SID of HANA, for example: HN1 .
Enter Instance Number [00]: Enter the HANA Instance number. Enter 03 if you used the Azure
template or followed the manual deployment section of this article.
Select Database Mode / Enter Index [1]: Select Enter.
Select System Usage / Enter Index [4]: Select the system usage value.
Enter Location of Data Volumes [/hana/data/HN1]: Select Enter.
Enter Location of Log Volumes [/hana/log/HN1]: Select Enter.
Restrict maximum memory allocation? [n]: Select Enter.
Enter Certificate Host Name For Host '...' [...]: Select Enter.
Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password.
Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to
confirm.
Enter System Administrator (hdbadm) Password: Enter the system administrator password.
Confirm System Administrator (hdbadm) Password: Enter the system administrator password again
to confirm.
Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select Enter.
Enter System Administrator Login Shell [/bin/sh]: Select Enter.
Enter System Administrator User ID [1001]: Select Enter.
Enter ID of User Group (sapsys) [79]: Select Enter.
Enter Database User (SYSTEM) Password: Enter the database user password.
Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm.
Restart system after machine reboot? [n]: Select Enter.
Do you want to continue? (y/n): Validate the summary. Enter y to continue.
2. [A] Upgrade the SAP Host Agent.
Download the latest SAP Host Agent archive from the SAP Software Center and run the following
command to upgrade the agent. Replace the path to the archive to point to the file that you
downloaded:
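A sketch of the upgrade command; the archive path is a placeholder:
sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR archive>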
hdbsql -u SYSTEM -p "passwd" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1 SYSTEM USER PASSWORD "passwd"'
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"
su - hdbadm
hdbnsutil -sr_enable --name=SITE1
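On the secondary node, the registration might look like the following sketch, mirroring the steps shown in the RHEL section later in this document:
HDB stop
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
HDB start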
# Replace the bold string with your instance number and HANA system ID
IMPORTANT
Recent testing revealed situations where netcat stops responding to requests because of its backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load Balancer requests, and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently, we recommend
using the azure-lb resource agent, which is part of the package resource-agents, with the following package version
requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms
are removed from the software, we’ll remove them from this article.
# Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the
Azure load balancer.
# Clean up the HANA resources. The HANA resources might have failed because of a known issue.
sudo crm resource cleanup rsc_SAPHana_HN1_HDB03
Make sure that the cluster status is ok and that all of the resources are started. It's not important on which
node the resources are running.
sudo crm_mon -r
hn1-db-0:~ # SAPHanaSR-showAttr
Global cib-time
--------------------------------
global Mon Aug 13 11:26:04 2018
You can migrate the SAP HANA master node by executing the following command:
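A sketch of the migration command, assuming the msl resource name used elsewhere in this article (crmsh also accepts move as a synonym for migrate):
crm resource migrate msl_SAPHana_HN1_HDB03 hn1-db-1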
If you set AUTOMATED_REGISTER="false" , this sequence of commands should migrate the SAP HANA master
node and the group that contains the virtual IP address to hn1-db-1.
Once the migration is done, the crm_mon -r output looks like this
Online: [ hn1-db-0 hn1-db-1 ]
Failed Actions:
* rsc_SAPHana_HN1_HDB03_start_0 on hn1-db-0 'not running' (7): call=84, status=complete,
exitreason='none',
last-rc-change='Mon Aug 13 11:31:37 2018', queued=0ms, exec=2095ms
The SAP HANA resource on hn1-db-0 fails to start as secondary. In this case, configure the HANA instance as
secondary by executing this command:
su - hn1adm
You also need to clean up the state of the secondary node resource:
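A sketch of both steps, using the commands that appear in the test section later in this article (run the registration as hn1adm, the cleanup as root):
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0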
Monitor the state of the HANA resource using crm_mon -r. Once HANA is started on hn1-db-0, the output
should look like this
The virtual machine should now restart or stop depending on your cluster configuration. If you set the
stonith-action setting to off, the virtual machine is stopped and the resources are migrated to the running
virtual machine.
After you start the virtual machine again, the SAP HANA resource fails to start as secondary if you set
AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary by executing this
command:
su - hn1adm
Cluster node hn1-db-0 should be rebooted. The Pacemaker service might not get started afterwards. Make
sure to start it again.
Test a manual failover
You can test a manual failover by stopping the pacemaker service on the hn1-db-0 node:
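For example (the counterpart of the service pacemaker start command shown below):
service pacemaker stop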
After the failover, you can start the service again. If you set AUTOMATED_REGISTER="false" , the SAP HANA
resource on the hn1-db-0 node fails to start as secondary. In this case, configure the HANA instance as
secondary by executing this command:
service pacemaker start
su - hn1adm
SUSE tests
Run all test cases that are listed in the SAP HANA SR Performance Optimized Scenario or SAP HANA SR Cost
Optimized Scenario guide, depending on your use case. You can find the guides on the SLES for SAP best
practices page.
The following tests are a copy of the test descriptions of the SAP HANA SR Performance Optimized Scenario
SUSE Linux Enterprise Server for SAP Applications 12 SP1 guide. For an up-to-date version, always also read
the guide itself. Always make sure that HANA is in sync before starting the test and also make sure that the
Pacemaker configuration is correct.
In the following test descriptions we assume PREFER_SITE_TAKEOVER="true" and
AUTOMATED_REGISTER="false". NOTE: The following tests are designed to be run in sequence and depend on
the exit state of the preceding tests.
1. TEST 1: STOP PRIMARY DATABASE ON NODE 1
Resource state before starting the test:
Pacemaker should detect the stopped HANA instance and failover to the other node. Once the failover
is done, the HANA instance on node hn1-db-0 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-0 as secondary and cleanup the failed resource.
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0
Pacemaker should detect the stopped HANA instance and failover to the other node. Once the failover
is done, the HANA instance on node hn1-db-1 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-1 as secondary and cleanup the failed resource.
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1
Pacemaker should detect the killed HANA instance and failover to the other node. Once the failover is
done, the HANA instance on node hn1-db-0 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-0 as secondary and cleanup the failed resource.
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1
Pacemaker should detect the killed cluster node and fence the node. Once the node is fenced,
Pacemaker will trigger a takeover of the HANA instance. When the fenced node is rebooted, Pacemaker
will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-0, register
node hn1-db-0 as secondary, and cleanup the failed resource.
# run as root
# list the SBD device(s)
hn1-db-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
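A hedged sketch of clearing the SBD message for hn1-db-0 and starting Pacemaker, assuming the first SBD device from the commented list above:
hn1-db-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 message hn1-db-0 clear
hn1-db-0:~ # systemctl start pacemaker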
# run as <hanasid>adm
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0
Pacemaker should detect the killed cluster node and fence the node. Once the node is fenced,
Pacemaker will trigger a takeover of the HANA instance. When the fenced node is rebooted, Pacemaker
will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-1, register
node hn1-db-1 as secondary, and cleanup the failed resource.
# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
# run as <hanasid>adm
hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1
Pacemaker will detect the stopped HANA instance and mark the resource as failed on node hn1-db-1.
Pacemaker should automatically restart the HANA instance. Run the following command to clean up
the failed state.
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1
Pacemaker will detect the killed HANA instance and mark the resource as failed on node hn1-db-1. Run
the following command to clean up the failed state. Pacemaker should then automatically restart the
HANA instance.
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1
9. TEST 9: CRASH SECONDARY SITE NODE (NODE 2) RUNNING SECONDARY HANA DATABASE
Resource state before starting the test:
Pacemaker should detect the killed cluster node and fence the node. When the fenced node is
rebooted, Pacemaker will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-1, and
cleanup the failed resource.
# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability of SAP HANA on Azure VMs on
Red Hat Enterprise Linux
12/22/2020 • 26 minutes to read
For on-premises development, you can use either HANA System Replication or shared storage to establish
high availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication on Azure is
currently the only supported high availability function. SAP HANA Replication consists of one primary node and
at least one secondary node. Changes to the data on the primary node are replicated to the secondary node
synchronously or asynchronously.
This article describes how to deploy and configure the virtual machines, install the cluster framework, and install
and configure SAP HANA System Replication. In the example configurations, installation commands, instance
number 03 , and HANA System ID HN1 are used.
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
Overview
To achieve high availability, SAP HANA is installed on two virtual machines. The data is replicated by using HANA
System Replication.
SAP HANA System Replication setup uses a dedicated virtual hostname and virtual IP addresses. On Azure, a
load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.0.0.13 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be part
of HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP, 30340 TCP, 30341 TCP, 30342 TCP
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes. See also
SAP note 2382421.
ls /dev/disk/azure/scsi1/lun*
Example output:
Create physical volumes for all of the disks that you want to use:
Create a volume group for the data files. Use one volume group for the log files and one for the shared
directory of SAP HANA:
Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the
values documented in SAP HANA VM storage configurations. The -i argument should be the number of
the underlying physical volumes and the -I argument is the stripe size. In this document, two physical
volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe size for the data
volume is 256KiB . One physical volume is used for the log volume, so no -i or -I switches are
explicitly used for the log volume commands.
IMPORTANT
Use the -i switch and set it to the number of underlying physical volumes when you use more than one
physical volume for each data, log, or shared volume. Use the -I switch to specify the stripe size when creating
a striped volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.
Create the mount directories and copy the UUID of all of the logical volumes:
sudo mkdir -p /hana/data/HN1
sudo mkdir -p /hana/log/HN1
sudo mkdir -p /hana/shared/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data, /dev/vg_hana_log_HN1/hana_log, and
/dev/vg_hana_shared_HN1/hana_shared
sudo blkid
sudo vi /etc/fstab
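A sketch of the entries, with placeholders for the UUIDs noted from blkid:
/dev/disk/by-uuid/<UUID of hana_data> /hana/data xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of hana_log> /hana/log xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of hana_shared> /hana/shared xfs defaults,nofail 0 2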
sudo mount -a
sudo vi /etc/hosts
Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment:
10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1
hdbsql -u SYSTEM -p "passwd" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1 SYSTEM USER PASSWORD "passwd"'
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"
su - hdbadm
hdbnsutil -sr_enable --name=SITE1
HDB stop
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
HDB start
Create a Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker
cluster for this HANA server.
Next, create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes:
# Replace the bold string with your instance number and HANA system ID
sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \
op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true
NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from
the software, we’ll remove it from this article.
# Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the
Azure load balancer.
#
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true
DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
master notify=true clone-max=2 clone-node-max=1 interleave=true
Make sure that the cluster status is ok and that all of the resources are started. It's not important on which node
the resources are running.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For
instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
You can migrate the SAP HANA master node by executing the following command:
# On RHEL 7.x
[root@hn1-db-0 ~]# pcs resource move SAPHana_HN1_03-master
# On RHEL 8.x
[root@hn1-db-0 ~]# pcs resource move SAPHana_HN1_03-clone --master
If you set AUTOMATED_REGISTER="false" , this command should migrate the SAP HANA master node and the group
that contains the virtual IP address to hn1-db-1.
Once the migration is done, the 'sudo pcs status' output looks like this
The SAP HANA resource on hn1-db-0 is stopped. In this case, configure the HANA instance as secondary by
executing this command:
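A sketch of the registration and cleanup, assuming the names used in this article (run the registration as hn1adm, the cleanup as root):
su - hn1adm
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
# run as root
pcs resource cleanup SAPHana_HN1_03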
Monitor the state of the HANA resource using 'pcs status'. Once HANA is started on hn1-db-0, the output should
look like this
Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from
the software, we’ll remove it from this article.
You can test the setup of the Azure fencing agent by disabling the network interface on the node where SAP
HANA is running as Master. See Red Hat Knowledgebase article 79523 for a description on how to simulate a
network failure. In this example we use the net_breaker script to block all access to the network.
The virtual machine should now restart or stop depending on your cluster configuration. If you set the
stonith-action setting to off, the virtual machine is stopped and the resources are migrated to the running
virtual machine.
After you start the virtual machine again, the SAP HANA resource fails to start as secondary if you set
AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary by executing this command:
su - hn1adm
You can test a manual failover by stopping the cluster on the hn1-db-0 node:
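For example:
pcs cluster stop hn1-db-0
After the test, the node can rejoin the cluster with pcs cluster start hn1-db-0.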
After the failover, you can start the cluster again. If you set AUTOMATED_REGISTER="false" , the SAP HANA resource
on the hn1-db-0 node fails to start as secondary. In this case, configure the HANA instance as secondary by
executing this command:
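A sketch, assuming hn1-db-1 is now the primary (run as hn1adm on hn1-db-0, then clean up the resource state as root with pcs resource cleanup):
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1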
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP HANA VM storage configurations
High availability of SAP HANA Scale-up with Azure
NetApp Files on Red Hat Enterprise Linux
12/22/2020 • 24 minutes to read
This article describes how to configure SAP HANA System Replication in Scale-up deployment, when the HANA file
systems are mounted via NFS, using Azure NetApp Files (ANF). In the example configurations and installation
commands, instance number 03 , and HANA System ID HN1 are used. SAP HANA Replication consists of one
primary node and at least one secondary node.
When steps in this document are marked with the following prefixes, the meaning is as follows:
[A] : The step applies to all nodes
[1] : The step applies to node1 only
[2] : The step applies to node2 only
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 405827 lists out recommended file system for HANA environment.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in pacemaker cluster.
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration.
High Availability Add-On Reference.
Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems
are on NFS shares
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members.
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure.
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure.
Configure SAP HANA scale-up system replication in a Pacemaker cluster when the HANA file systems are
on NFS shares
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
Traditionally, in a scale-up environment all file systems for SAP HANA are mounted from local storage. Setting up
high availability of SAP HANA System Replication on Red Hat Enterprise Linux is published in the guide Set up SAP
HANA System Replication on RHEL.
To achieve SAP HANA high availability of a scale-up system on Azure NetApp Files NFS shares, we need some
additional resource configuration in the cluster, so that HANA resources can recover when one node loses access
to the NFS shares on ANF. The cluster manages the NFS mounts, allowing it to monitor the health of
the resources. The dependencies between the file system mounts and the SAP HANA resources are enforced.
SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems
/hana/data, /hana/log, and /hana/shared are unique to each node.
Mounted on node1 (hanadb1)
10.32.2.4:/hanadb1-data-mnt00001 on /hana/data
10.32.2.4:/hanadb1-log-mnt00001 on /hana/log
10.32.2.4:/hanadb1-shared-mnt00001 on /hana/shared
Mounted on node2 (hanadb2)
10.32.2.4:/hanadb2-data-mnt00001 on /hana/data
10.32.2.4:/hanadb2-log-mnt00001 on /hana/log
10.32.2.4:/hanadb2-shared-mnt00001 on /hana/shared
NOTE
File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own,
separate file systems.
The SAP HANA System Replication configuration uses a dedicated virtual hostname and virtual IP addresses. On
Azure, a load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.32.0.10 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be part of
HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP, 30340 TCP, 30341 TCP, 30342 TCP (if using Basic Azure
Load balancer)
IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines
and the Azure NetApp Files volumes are deployed in proximity.
NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not be
sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.
TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual machines,
or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput demands of your
application.
NOTE
All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes. If you deployed the
/hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
1. [A] Create the mount points for the HANA database volumes:
mkdir -p /hana/data
mkdir -p /hana/log
mkdir -p /hana/shared
2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com , and that the mapping is set to nobody .
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody.
3. [1] Mount the node-specific volumes on node1 (hanadb1):
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
4. [2] Mount the node-specific volumes on node2 (hanadb2):
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
5. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.
sudo nfsstat -m
6. [A] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/modules, because access is reserved for the kernel / drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
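If the value is not Y, a hedged sketch of how to set it persistently (a reboot makes the change effective):
# run as root
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
reboot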
sudo vi /etc/hosts
# Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment
10.32.0.4 hanadb1
10.32.0.5 hanadb2
Cluster configuration
This section describes the steps required for the cluster to operate seamlessly when SAP HANA is installed on
NFS shares that use Azure NetApp Files.
Create a Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this HANA server.
Configure filesystem resources
In this example each cluster node has its own HANA NFS filesystems /hana/shared, /hana/data, and /hana/log.
1. [1] Put the cluster in maintenance mode.
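For example, with pcs:
sudo pcs property set maintenance-mode=true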
The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that each monitor performs a read/write
test on the filesystem. Without this attribute, the monitor operation only verifies that the filesystem is
mounted. This can be a problem because, when connectivity is lost, the filesystem may remain mounted
despite being inaccessible.
The on-fail=fence attribute is also added to the monitor operation. With this option, if the monitor operation
fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all
resources that depend on the failed resource, then restart the failed resource, and then start all the resources
that depend on the failed resource. Not only can this behavior take a long time when an SAPHana resource
depends on the failed resource, but it also can fail altogether. The SAPHana resource cannot stop
successfully if the NFS server holding the HANA executables is inaccessible. A sketch of such a filesystem resource is shown below.
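A hedged sketch of one such resource for the hanadb1 data volume, assuming the group name hanadb1_nfs referenced later in this article; the mount options match the mount commands shown earlier:
sudo pcs resource create hana_data1 ocf:heartbeat:Filesystem \
   device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs \
   options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys \
   op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
   --group hanadb1_nfs
The log and shared volumes follow the same pattern, with their own device and directory values.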
4. [1] Configuring Location Constraints
Configure location constraints to ensure that the resources that manage hanadb1 unique mounts can never
run on hanadb2, and vice-versa.
The resource-discovery=never option is set because the unique mounts for each node share the same
mount point. For example, hana_data1 uses mount point /hana/data , and hana_data2 also uses mount
point /hana/data . Sharing the same mount point can cause a false positive for a probe operation when
resource state is checked at cluster startup, which in turn can cause unnecessary recovery behavior. This can
be avoided by setting resource-discovery=never; a sketch of such a constraint follows.
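A hedged sketch of the two constraints in pcs syntax; the constraint IDs are placeholders:
sudo pcs constraint location add loc_hanadb1_nfs_not_on_hanadb2 hanadb1_nfs hanadb2 -INFINITY resource-discovery=never
sudo pcs constraint location add loc_hanadb2_nfs_not_on_hanadb1 hanadb2_nfs hanadb1 -INFINITY resource-discovery=never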
TIP
If your configuration includes file systems, outside of group hanadb1_nfs or hanadb2_nfs , then include the
sequential=false option, so that there are no ordering dependencies among the file systems. All file systems
must start before hana_nfs1_active , but they do not need to start in any order relative to each other. For more
details see How do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA
filesystems are on NFS shares
NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed
from the software, we’ll remove it from this article.
2. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share
(/hana/shared).
The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during
failover. File system /hana/shared is mounted over NFS in the presented scenario.
It is difficult to simulate a failure where one of the servers loses access to the NFS share. A test that can be
performed is to re-mount the file system as read-only. This approach validates that the cluster is able
to fail over if access to /hana/shared is lost on the active node.
Expected result: On making /hana/shared a read-only file system, the OCF_CHECK_LEVEL attribute of the
resource hana_shared1, which performs read/write operations on the file system, causes the monitor to fail
because it is not able to write anything to the file system, and this triggers a HANA resource failover. The
same result is expected when your HANA node loses access to the NFS shares.
Resource state before starting the test:
You can place /hana/shared in read-only mode on the active cluster node with the following command:
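A minimal example (assuming /hana/shared is already mounted):
# Execute as root on the active node
mount -o remount,ro /hana/shared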
hanadb1 will either reboot or power off, depending on the stonith action that is set (
pcs property show stonith-action ). Once the server (hanadb1) is down, the HANA resources move to hanadb2.
You can check the status of the cluster from hanadb2.
pcs status
We recommend that you thoroughly test the SAP HANA cluster configuration by also performing the tests
described in Setup SAP HANA System Replication on RHEL.
Verify and troubleshoot SAP HANA scale-out high-
availability setup on SLES 12 SP3
12/22/2020 • 27 minutes to read
This article helps you check the Pacemaker cluster configuration for SAP HANA scale-out that runs on Azure virtual
machines (VMs). The cluster setup was accomplished in combination with SAP HANA System Replication (HSR) and
the SUSE RPM package SAPHanaSR-ScaleOut. All tests were done on SUSE SLES 12 SP3 only. The article's sections
cover different areas and include sample commands and excerpts from config files. We recommend these samples
as a method to verify and check the whole cluster setup.
Important notes
All testing for SAP HANA scale-out in combination with SAP HANA System Replication and Pacemaker was done
with SAP HANA 2.0 only. The operating system version was SUSE Linux Enterprise Server 12 SP3 for SAP
applications. The latest RPM package, SAPHanaSR-ScaleOut from SUSE, was used to set up the Pacemaker cluster.
SUSE published a detailed description of this performance-optimized setup.
For virtual machine types that are supported for SAP HANA scale-out, check the SAP HANA certified IaaS directory.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.
There was a technical issue with SAP HANA scale-out in combination with multiple subnets and vNICs when setting
up HSR. It's mandatory to use the latest SAP HANA 2.0 patches, where this issue was fixed. The following SAP HANA
versions are supported:
rev2.00.024.04 or higher
rev2.00.032 or higher
If you need support from SUSE, follow this guide. Collect all the information about the SAP HANA high-availability
(HA) cluster as described in the article. SUSE support needs this information for further analysis.
During internal testing, the cluster setup got confused by a normal graceful VM shutdown via the Azure portal. So
we recommend that you test a cluster failover by other methods, such as forcing a kernel panic, shutting down
the networks, or migrating the msl resource. See details in the following sections. The assumption is that a
standard shutdown happens with intention; the best example of an intentional shutdown is maintenance. See
details in Planned maintenance.
Also, during internal testing, the cluster setup got confused after a manual SAP HANA takeover while the cluster
was in maintenance mode. We recommend that you switch it back again manually before you end the cluster
maintenance mode. Another option is to trigger a failover before you put the cluster into maintenance mode. For
more information, see Planned maintenance. The documentation from SUSE describes how you can reset the
cluster in this way by using the crm command. But the approach mentioned previously was robust during internal
testing and never showed any unexpected side effects.
When you use the crm migrate command, make sure to clean up the cluster configuration. It adds location
constraints that you might not be aware of. These constraints impact the cluster behavior. See more details in
Planned maintenance.
Test system description
For SAP HANA scale-out HA verification and certification, a setup consisting of two systems with three
SAP HANA nodes each (one master and two workers) was used. The following table lists VM names and internal IP addresses.
All the verification samples that follow were done on these VMs. By using these VM names and IP addresses in the
command samples, you can better understand the commands and their outputs:
(Table: node type, VM name, and IP address of each cluster node.)
The following sample output is from the second worker node on site 2. You can see three different internal IP
addresses from eth0, eth1, and eth2:
Next, verify the SAP HANA ports for the name server and HSR. SAP HANA should listen on the corresponding
subnets. Depending on the SAP HANA instance number, you have to adapt the commands. For the test system, the
instance number was 00 . There are different ways to find out which ports are used.
The following SQL statement returns the instance ID, instance number, and other information:
To find the correct port numbers, you can look, for example, in HANA Studio under Configuration or via a SQL
statement:
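One possible approach is to query the system view SYS.M_SERVICES via hdbsql (a sketch; the executable path, instance number 00, and credentials are examples and must match your system):
# Execute as <sid>adm
/usr/sap/HSO/HDB00/exe/hdbsql -n localhost -i 00 -u SYSTEM -p "<password>" \
  "SELECT HOST, SERVICE_NAME, PORT, SQL_PORT FROM SYS.M_SERVICES"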
To find every port that's used in the SAP software stack including SAP HANA, search TCP/IP ports of all SAP
products.
Given the instance number 00 in the SAP HANA 2.0 test system, the port number for the name server is 30001 ,
and the port number for HSR metadata communication is 40002 . One option is to sign in to a worker node and then
check the master node services. For this article, we checked from worker node 2 on site 2, trying to connect to the
master node on site 2.
Check the name server port:
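For example, with netcat (the IP addresses are placeholders for the master node's addresses on the respective subnets):
# Should succeed only via the internode subnet 10.0.2.0/24
nc -vz <master-node-IP-on-10.0.2.0/24> 30001
# Should fail via the other subnets
nc -vz <master-node-IP-on-10.0.0.0/24> 30001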
To prove that the internode communication uses subnet 10.0.2.0/24 , the result should look like the following
sample output. Only the connection via subnet 10.0.2.0/24 should succeed:
To prove that the HSR communication uses subnet 10.0.1.0/24 , the result should look like the following sample
output. Only the connection via subnet 10.0.1.0/24 should succeed:
Corosync
The corosync config file has to be correct on every node in the cluster including the majority maker node. If the
cluster join of a node doesn't work as expected, create or copy /etc/corosync/corosync.conf manually onto all
nodes and restart the service.
The content of corosync.conf from the test system is an example.
The first section is totem , as described in Cluster installation, step 11. You can ignore the value for mcastaddr . Just
keep the existing entry. The entries for token and consensus must be set according to Microsoft Azure SAP HANA
documentation.
totem {
version: 2
secauth: on
crypto_hash: sha1
crypto_cipher: aes256
cluster_name: hacluster
clear_node_high_bit: yes
token: 30000
token_retransmits_before_loss_const: 10
join: 60
consensus: 36000
max_messages: 20
interface {
ringnumber: 0
bindnetaddr: 10.0.0.0
mcastaddr: 239.170.19.232
mcastport: 5405
ttl: 1
}
transport: udpu
The second section, logging , wasn't changed from the given defaults:
logging {
fileline: off
to_stderr: no
to_logfile: no
logfile: /var/log/cluster/corosync.log
to_syslog: yes
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}
The third section shows the nodelist . All nodes of the cluster have to show up with their nodeid :
nodelist {
node {
ring0_addr:hso-hana-vm-s1-0
nodeid: 1
}
node {
ring0_addr:hso-hana-vm-s1-1
nodeid: 2
}
node {
ring0_addr:hso-hana-vm-s1-2
nodeid: 3
}
node {
ring0_addr:hso-hana-vm-s2-0
nodeid: 4
}
node {
ring0_addr:hso-hana-vm-s2-1
nodeid: 5
}
node {
ring0_addr:hso-hana-vm-s2-2
nodeid: 6
}
node {
ring0_addr:hso-hana-dm
nodeid: 7
}
}
In the last section, quorum , it's important to set the value for expected_votes correctly. It must be the number of
nodes including the majority maker node. And the value for two_node has to be 0 . Don't remove the entry
completely. Just set the value to 0 .
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 7
two_node: 0
}
Restart the service via systemctl :
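For example:
sudo systemctl restart corosync
# Depending on your setup, you may also need to restart Pacemaker afterwards
sudo systemctl restart pacemaker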
SBD device
How to set up an SBD device on an Azure VM is described in SBD fencing.
First, check on the SBD server VM if there are ACL entries for every node in the cluster. Run the following command
on the SBD server VM:
targetcli ls
On the test system, the output of the command looks like the following sample. ACL names like iqn.2006-04.hso-
db-0.local:hso-db-0 must be entered as the corresponding initiator names on the VMs. Every VM needs a
different one.
| | o- sbddbhso ................................................................... [/sbd/sbddbhso (50.0MiB)
write-thru activated]
| | o- alua
................................................................................................... [ALUA
Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA
state: Active/optimized]
| o- pscsi ..................................................................................................
[Storage Objects: 0]
| o- ramdisk ................................................................................................
[Storage Objects: 0]
o- iscsi
............................................................................................................
[Targets: 1]
| o- iqn.2006-04.dbhso.local:dbhso
..................................................................................... [TPGs: 1]
| o- tpg1 ...............................................................................................
[no-gen-acls, no-auth]
| o- acls
..........................................................................................................
[ACLs: 7]
| | o- iqn.2006-04.hso-db-0.local:hso-db-0
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-1.local:hso-db-1
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-2.local:hso-db-2
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-3.local:hso-db-3
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-4.local:hso-db-4
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-5.local:hso-db-5
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-6.local:hso-db-6
.................................................................. [Mapped LUNs: 1]
Then check that the initiator names on all the VMs are different and correspond to the previously shown entries.
This example is from worker node 1 on site 1:
cat /etc/iscsi/initiatorname.iscsi
Next, verify that the discovery works correctly. Run the following command on every cluster node by using the IP
address of the SBD server VM:
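For example (10.0.0.19 is the IP address of the SBD server VM in this test system):
sudo iscsiadm -m discovery -t st -p 10.0.0.19:3260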
10.0.0.19:3260,1 iqn.2006-04.dbhso.local:dbhso
The next proof point is to verify that the node sees the SBD device. Check it on every node, including the majority
maker node:
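For example:
# The iSCSI disk exported by the SBD server should appear in the list
lsscsi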
The output should look like the following sample. However, the names might differ. The device name might also
change after the VM reboots:
Depending on the status of the system, it sometimes helps to restart the iSCSI services to resolve issues. Then run
the following commands:
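For example (service names may vary by SLES release):
sudo systemctl restart iscsid
sudo systemctl restart iscsi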
From any node, you can check if all nodes are clear . Make sure that you use the correct device name on a specific
node:
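A sketch, assuming the SBD device shows up as /dev/sdd on this node:
sudo sbd -d /dev/sdd list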
The output should show clear for every node in the cluster:
0 hso-hana-vm-s1-0 clear
1 hso-hana-vm-s2-2 clear
2 hso-hana-vm-s2-1 clear
3 hso-hana-dm clear
4 hso-hana-vm-s1-1 clear
5 hso-hana-vm-s2-0 clear
6 hso-hana-vm-s1-2 clear
Another SBD check is the dump option of the sbd command. In this sample command and output from the
majority maker node, the device name was sdd , not sdm :
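For example:
sudo sbd -d /dev/sdd dump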
The output, apart from the device name, should look the same on all nodes:
One more check for SBD is the possibility to send a message to another node. To send a message to worker node 2
on site 2, run the following command on worker node 1 on site 2:
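For example (assuming device name /dev/sdd on that node):
sudo sbd -d /dev/sdd message hso-hana-vm-s2-2 test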
On the target VM side, hso-hana-vm-s2-2 in this example, you can find the following entry in
/var/log/messages :
Check that the entries in /etc/sysconfig/sbd correspond to the description in Setting up Pacemaker on SUSE
Linux Enterprise Server in Azure. Verify that the startup setting in /etc/iscsi/iscsid.conf is set to automatic.
The following entries are important in /etc/sysconfig/sbd . Adapt the id value if necessary:
SBD_DEVICE="/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68;"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_WATCHDOG=yes
Check the startup setting in /etc/iscsi/iscsid.conf . The required setting should have happened with the following
iscsiadm command, described in the documentation. Verify and adapt it manually with vi if it's different.
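A possible form of that command (verify it against your iscsiadm version):
sudo iscsiadm -m node --op=update --name=node.startup --value=automatic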
The resulting setting is:
node.startup = automatic
During testing and verification, after the restart of a VM, the SBD device wasn't visible anymore in some cases.
There was a discrepancy between the startup setting and what YaST2 showed. To check the settings, take these
steps:
1. Start YaST2.
2. Select Network Services on the left side.
3. Scroll down on the right side to iSCSI Initiator and select it.
4. On the next screen under the Service tab, you see the unique initiator name for the node.
5. Above the initiator name, make sure that the Service Start value is set to When Booting .
6. If it's not, then set it to When Booting instead of Manually .
7. Next, switch the top tab to Connected Targets .
8. On the Connected Targets screen, you should see an entry for the SBD device like this sample:
10.0.0.19:3260 iqn.2006-04.dbhso.local:dbhso .
9. Check if the Start-Up value is set to on boot .
10. If not, choose Edit and change it.
11. Save the changes and exit YaST2.
Pacemaker
After everything is set up correctly, you can run the following command on every node to check the status of the
Pacemaker service:
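For example:
sudo systemctl status pacemaker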
The top of the output should look like the following sample. It's important that the status after Active is shown as
loaded and active (running) . The status after Loaded must be shown as enabled .
pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-09-07 05:56:27 UTC; 4 days ago
Docs: man:pacemakerd
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/index.html
Main PID: 4496 (pacemakerd)
Tasks: 7 (limit: 4915)
CGroup: /system.slice/pacemaker.service
├─4496 /usr/sbin/pacemakerd -f
├─4499 /usr/lib/pacemaker/cib
├─4500 /usr/lib/pacemaker/stonithd
├─4501 /usr/lib/pacemaker/lrmd
├─4502 /usr/lib/pacemaker/attrd
├─4503 /usr/lib/pacemaker/pengine
└─4504 /usr/lib/pacemaker/crmd
crm status
The output should look like the following sample. It's fine that the cln and msl resources are shown as Stopped on
the majority maker VM, hso-hana-dm , because there's no SAP HANA installation on the majority maker node. It's
important that the output shows the correct total number of VMs, 7 , and that all VMs that are part of the cluster
are listed with the status Online . The current primary master node must be recognized correctly; in this example,
it's hso-hana-vm-s1-0 :
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Sep 11 15:56:40 2018
Last change: Tue Sep 11 15:56:23 2018 by root via crm_attribute on hso-hana-vm-s1-0
7 nodes configured
17 resources configured
When you check with crm status while the cluster is in maintenance mode, you notice that all resources are
marked as unmanaged . In this state, the cluster doesn't react to any changes, like starting or stopping SAP HANA.
The following sample shows the output of the crm status command while the cluster is in maintenance mode:
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Wed Sep 12 07:48:10 2018
Last change: Wed Sep 12 07:46:54 2018 by root via cibadmin on hso-hana-vm-s2-1
7 nodes configured
17 resources configured
This command sample shows how to end the cluster maintenance mode:
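For example:
crm configure property maintenance-mode=false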
Another crm command gets the complete cluster configuration into an editor, so you can edit it. After you save the
changes, the cluster starts the appropriate actions:
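For example:
crm configure edit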
After failures of cluster resources, the crm status command shows a list of Failed Actions . See the following
sample of this output:
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Thu Sep 13 07:30:44 2018
Last change: Thu Sep 13 07:30:20 2018 by root via crm_attribute on hso-hana-vm-s1-0
7 nodes configured
17 resources configured
Failed Actions:
* rsc_SAPHanaCon_HSO_HDB00_monitor_60000 on hso-hana-vm-s2-0 'unknown error' (1): call=86, status=complete,
exitreason='none',
last-rc-change='Wed Sep 12 17:01:28 2018', queued=0ms, exec=277663ms
It's necessary to do a cluster cleanup after failures. Use the crm command with the cleanup option to get rid of
these failed-action entries, naming the corresponding cluster resource:
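For example (the resource name and node come from the failed action shown above):
crm resource cleanup rsc_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0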
Failover or takeover
As discussed in Important notes, you shouldn't use a standard graceful shutdown to test the cluster failover or SAP
HANA HSR takeover. Instead, we recommend that you trigger a kernel panic, force a resource migration, or shut
down all networks at the OS level of a VM. Another method is the crm <node> standby command; see the SUSE
documentation.
The following three sample commands can force a cluster failover:
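A sketch of the three methods (the msl resource name is an example and must match your configuration):
# 1. Force a kernel panic on the current primary master node (run there as root)
echo c > /proc/sysrq-trigger
# 2. Shut down the network at the OS level (example for interface eth0)
ifdown eth0
# 3. Migrate the msl resource to the secondary master node
crm resource migrate msl_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0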
As described in Planned maintenance, a good way to monitor the cluster activities is to run SAPHanaSR-
showAttr with the watch command:
watch SAPHanaSR-showAttr
It also helps to look at the SAP HANA landscape status coming from an SAP Python script; the cluster setup
watches this status value. This becomes clear when you consider a worker node failure: if a worker node goes
down, SAP HANA doesn't immediately return an error for the health of the whole scale-out system.
There are some retries to avoid unnecessary failovers. The cluster reacts only if the status changes from Ok (return
value 4 ) to error (return value 1 ). So it's correct if the output from SAPHanaSR-showAttr shows a VM with the
state offline while there's no activity yet to switch primary and secondary. No cluster activity gets triggered as long
as SAP HANA doesn't return an error.
You can monitor the SAP HANA landscape health status as user <HANA SID>adm by calling the SAP Python
script as follows. You might have to adapt the path:
watch python
/hana/shared/HSO/exe/linuxx86_64/HDB_2.00.032.00.1533114046_eeaf4723ec52ed3935ae0dc9769c9411ed73fec5/python_sup
port/landscapeHostConfiguration.py
The output of this command should look like the following sample. The Host Status column and the overall host
status are both important. The actual output is wider, with additional columns. To make the output table more
readable within this document, most columns on the right side were stripped:
There's another command to check current cluster activities. See the following command and the output tail after
the master node of the primary site was killed. You can see a list of transition actions, like promoting the former
secondary master node, hso-hana-vm-s2-0 , as the new primary master. If everything is fine and all activities are
finished, this Transition Summary list will be empty.
crm_simulate -Ls
...........
Transition Summary:
* Fence hso-hana-vm-s1-0
* Stop rsc_SAPHanaTop_HSO_HDB00:1 (hso-hana-vm-s1-0)
* Demote rsc_SAPHanaCon_HSO_HDB00:1 (Master -> Stopped hso-hana-vm-s1-0)
* Promote rsc_SAPHanaCon_HSO_HDB00:5 (Slave -> Master hso-hana-vm-s2-0)
* Move rsc_ip_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)
* Move rsc_nc_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)
Planned maintenance
There are different use cases when it comes to planned maintenance. One question is whether it's just
infrastructure maintenance, like changes at the OS level and disk configuration, or a HANA upgrade. You can find
additional information in documents from SUSE like Towards Zero Downtime or SAP HANA SR Performance
Optimized Scenario. These documents also include samples that show how to manually migrate a primary.
Intense internal testing was done to verify the infrastructure maintenance use case. To avoid any issues related to
migrating the primary, we decided to always migrate a primary before putting a cluster into maintenance mode.
This way, it's not necessary to make the cluster forget about the former situation: which side was primary and
which was secondary.
There are two different situations in this regard:
Planned maintenance on the current secondary . In this case, you can just put the cluster into
maintenance mode and do the work on the secondary without affecting the cluster.
Planned maintenance on the current primary . So that users can continue to work during maintenance,
you need to force a failover. With this approach, you must trigger the cluster failover via Pacemaker, and not
just on the SAP HANA HSR level; the Pacemaker setup automatically triggers the SAP HANA takeover. You
also need to accomplish the failover before you put the cluster into maintenance mode.
The procedure for maintenance on the current secondary site is as follows:
1. Put the cluster into maintenance mode.
2. Accomplish the work on the secondary site.
3. End the cluster maintenance mode.
The procedure for maintenance on the current primary site is more complex:
1. Manually trigger a failover or SAP HANA takeover via a Pacemaker resource migration. See details that follow.
2. SAP HANA on the former primary site gets shut down by the cluster setup.
3. Put the cluster into maintenance mode.
4. After the maintenance work is done, register the former primary as the new secondary site.
5. Clean up the cluster configuration. See details that follow.
6. End the cluster maintenance mode.
Migrating a resource adds an entry to the cluster configuration, for example when forcing a failover. You have to
clean up these entries before you end maintenance mode. See the following sample.
First, force a cluster failover by migrating the msl resource to the current secondary master node. This command
gives a warning that a move constraint was created:
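For example (hypothetical msl resource name, as before):
crm resource migrate msl_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0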
Check the failover process via the command SAPHanaSR-showAttr . To monitor the cluster status, open a
dedicated shell window and start the command with watch :
watch SAPHanaSR-showAttr
The output should show the manual failover. The former secondary master node, hso-hana-vm-s2-0 in this
sample, got promoted. The former primary site was stopped (lss value 1 for the former primary master node
hso-hana-vm-s1-0 ):
After the cluster failover and SAP HANA takeover, put the cluster into maintenance mode as described in
Pacemaker.
The commands SAPHanaSR-showAttr and crm status don't indicate anything about the constraints created by
the resource migration. One option to make these constraints visible is to show the complete cluster resource
configuration with the following command:
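For example:
crm configure show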
Within the cluster configuration, you find a new location constraint caused by the former manual resource
migration. This example entry starts with location cli- :
At the end of the maintenance work, you stop the cluster maintenance mode as shown in Pacemaker.
The hb_report command collects the cluster log files and tells you where it put the compressed log files:
You can then extract the individual files via the standard tar command:
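For example (the archive name is a placeholder; hb_report prints the actual name):
tar -xvf <hb_report-archive>.tar.bz2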
When you look at the extracted files, you find all the log files. Most of them were put in separate directories for
every node in the cluster:
Within the time range that was specified, the current master node hso-hana-vm-s1-0 was killed. You can find
entries related to this event in the journal.log :
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: (to hsoadm) root on none
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session opened for user hsoadm by
(uid=0)
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 systemd[1]: Started Session c44290 of user hsoadm.
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [TOTEM ] A new membership (10.0.0.13:120996) was
formed. Members left: 1
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [TOTEM ] Failed to receive the leave message.
failed: 1
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Removing all hso-hana-vm-s1-0 attributes for
peer loss
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Purged 1 peer with id=1 and/or uname=hso-
hana-vm-s1-0 from the membership cache
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 stonith-ng[28311]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 stonith-ng[28311]: notice: Purged 1 peer with id=1 and/or
uname=hso-hana-vm-s1-0 from the membership cache
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 cib[28310]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [QUORUM] Members[6]: 7 2 3 4 5 6
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [MAIN ] Completed service synchronization, ready
to provide service.
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 crmd[28315]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 pacemakerd[28308]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 cib[28310]: notice: Purged 1 peer with id=1 and/or uname=hso-hana-
vm-s1-0 from the membership cache
2018-09-13T07:38:03+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session closed for user hsoadm
Another example is the Pacemaker log file on the secondary master, which became the new primary master. This
excerpt shows that the status of the killed primary master node was set to offline :
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 3 still member of
group stonith-ng (peer=hso-hana-vm-s1-2, counter=5.1)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 4 still member of
group stonith-ng (peer=hso-hana-vm-s2-0, counter=5.2)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 5 still member of
group stonith-ng (peer=hso-hana-vm-s2-1, counter=5.3)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 6 still member of
group stonith-ng (peer=hso-hana-vm-s2-2, counter=5.4)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 7 still member of
group stonith-ng (peer=hso-hana-dm, counter=5.5)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membership: Node 1 left group crmd
(peer=hso-hana-vm-s1-0, counter=5.0)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: crm_update_peer_proc: pcmk_cpg_membership:
Node hso-hana-vm-s1-0[1] - corosync-cpg is now offline
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: peer_update_callback: Client hso-hana-vm-s1-
0/peer now has status [offline] (DC=hso-hana-dm, changed=4000000)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membership: Node 2 still member of
group crmd (peer=hso-hana-vm-s1-1, counter=5.0)
[communication]
tcp_keepalive_interval = 20
internal_network = 10.0.2/24
listeninterface = .internal
[internal_hostname_resolution]
10.0.2.40 = hso-hana-vm-s2-0
10.0.2.42 = hso-hana-vm-s2-2
10.0.2.41 = hso-hana-vm-s2-1
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[system_replication_communication]
listeninterface = .internal
[system_replication_hostname_resolution]
10.0.1.30 = hso-hana-vm-s1-0
10.0.1.31 = hso-hana-vm-s1-1
10.0.1.32 = hso-hana-vm-s1-2
10.0.1.40 = hso-hana-vm-s2-0
10.0.1.41 = hso-hana-vm-s2-1
10.0.1.42 = hso-hana-vm-s2-2
Hawk
The cluster solution provides a browser interface that offers a GUI for users who prefer menus and graphics to
having all the commands on the shell level. To use the browser interface, replace <node> with an actual SAP
HANA node in the following URL, and then enter the credentials of the cluster user ( hacluster ):
https://<node>:7630
You can also upload the hb_report output in Hawk under History , shown as follows. See hb_report to collect log
files:
With the History Explorer , you can then go through all the cluster transitions included in the hb_report output:
This final screenshot shows the Details section of a single transition. The cluster reacted to a primary master node
crash of node hso-hana-vm-s1-0 and is now promoting the secondary node, hso-hana-vm-s2-0 , as the new master:
Next steps
This troubleshooting guide describes high availability for SAP HANA in a scale-out configuration. In addition to the
database, another important component in an SAP landscape is the SAP NetWeaver stack. Learn about high
availability for SAP NetWeaver on Azure virtual machines that use SUSE Enterprise Linux Server.
High availability of SAP HANA scale-out system on
Red Hat Enterprise Linux
12/22/2020 • 37 minutes to read
This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA
system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file
systems in the presented architecture are provided by Azure NetApp Files and are mounted over NFS.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.
Before you begin, refer to the following SAP notes and papers:
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA Network Requirements
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Enterprise Linux Networking Guide
How do I configure SAP HANA Scale-Out System Replication in a Pacemaker cluster with HANA file
systems on NFS shares
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
Red Hat Enterprise Linux Solution for SAP HANA Scale-Out and System Replication
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Azure NetApp Files documentation
Overview
One method to achieve HANA high availability for HANA scale-out installations is to configure HANA system
replication and protect the solution with a Pacemaker cluster that allows automatic failover. When an active node
fails, the cluster fails over the HANA resources to the other site.
The presented configuration shows three HANA nodes on each site, plus a majority maker node to prevent a split-
brain scenario. The instructions can be adapted to include more VMs as HANA DB nodes.
The HANA shared file system /hana/shared in the presented architecture is provided by Azure NetApp Files. It's
mounted via NFSv4.1 on each HANA node in the same HANA system replication site. File systems /hana/data and
/hana/log are local file systems and aren't shared between the HANA DB nodes. SAP HANA will be installed in
non-shared mode.
TIP
For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage configurations.
In the preceding diagram, three subnets are represented within one Azure virtual network, following the SAP
HANA network recommendations:
for client communication - client 10.23.0.0/24
for internal HANA inter-node communication - inter 10.23.1.128/26
for HANA system replication - hsr 10.23.1.192/26
As /hana/data and /hana/log are deployed on local disks, it's not necessary to deploy a separate subnet and
separate virtual network cards for communication with the storage.
The Azure NetApp Files volumes are deployed in a separate subnet, delegated to Azure NetApp Files: anf 10.23.1.0/26.
Deploy local managed disks for /hana/data and /hana/log . The minimum recommended storage
configuration for /hana/data and /hana/log is described in SAP HANA Azure VMs storage configurations.
Deploy the primary network interface for each VM in the client virtual network subnet.
When the VM is deployed via the Azure portal, the network interface name is automatically generated. In these
instructions, for simplicity, we'll refer to the automatically generated primary network interfaces, which are
attached to the client Azure virtual network subnet, as hana-s1-db1-client , hana-s1-db2-client ,
hana-s1-db3-client , and so on.
IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of
SAP HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site.
Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
2. Create six network interfaces, one for each HANA DB virtual machine, in the inter virtual network subnet
(in this example, hana-s1-db1-inter , hana-s1-db2-inter , hana-s1-db3-inter , hana-s2-db1-inter ,
hana-s2-db2-inter , and hana-s2-db3-inter ).
3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr virtual network subnet (in
this example, hana-s1-db1-hsr , hana-s1-db2-hsr , hana-s1-db3-hsr , hana-s2-db1-hsr , hana-s2-
db2-hsr , and hana-s2-db3-hsr ).
4. Attach the newly created virtual network interfaces to the corresponding virtual machines:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Virtual Machines . Filter on the virtual machine name (for example, hana-s1-
db1 ), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-down
list, select the already created network interfaces for the inter and hsr subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hana-s1-db2 , hana-s1-
db3 , hana-s2-db1 , hana-s2-db2 and hana-s2-db3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
5. Enable accelerated networking for the additional network interfaces for the inter and hsr subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the inter and hsr subnets.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure
Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard
Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to
allow routing to public end points. For details on how to achieve outbound connectivity see Public endpoint
connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will
cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health
probes. See also SAP note 2382421.
# Client subnet
10.23.0.11 hana-s1-db1
10.23.0.12 hana-s1-db2
10.23.0.13 hana-s1-db3
10.23.0.14 hana-s2-db1
10.23.0.15 hana-s2-db2
10.23.0.16 hana-s2-db3
10.23.0.17 hana-s-mm
# Internode subnet
10.23.1.138 hana-s1-db1-inter
10.23.1.139 hana-s1-db2-inter
10.23.1.140 hana-s1-db3-inter
10.23.1.141 hana-s2-db1-inter
10.23.1.142 hana-s2-db2-inter
10.23.1.143 hana-s2-db3-inter
# HSR subnet
10.23.1.202 hana-s1-db1-hsr
10.23.1.203 hana-s1-db2-hsr
10.23.1.204 hana-s1-db3-hsr
10.23.1.205 hana-s2-db1-hsr
10.23.1.206 hana-s2-db2-hsr
10.23.1.207 hana-s2-db3-hsr
1. [AH] Create the mount point for the HANA shared file system:
mkdir -p /hana/shared
2. [AH] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, that is, defaultv4iddomain.com , and that the mapping is set to nobody .
This step is only needed if you're using Azure NetApp Files NFSv4.1.
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for
files on Azure NetApp volumes that are mounted on the VMs will be displayed as nobody .
3. [AH] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/module, because access is reserved for the kernel and drivers.
This step is only needed if you're using Azure NetApp Files NFSv4.1.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
4. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
sudo mount -o rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared
5. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.23.1.7:/HN1-shared-s2 /hana/shared
6. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all HANA DB VMs with NFS
protocol version NFSv4.1 .
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
1. [AH] List the available disks:
ls /dev/disk/azure/scsi1/lun*
Example output:
2. [AH] Create physical volumes for all of the disks that you want to use:
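For example (assuming three LUNs as shown above; adapt to your disk layout):
sudo pvcreate /dev/disk/azure/scsi1/lun0
sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2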
3. [AH] Create a volume group for the data files. Use one volume group for the log files and one for the
shared directory of SAP HANA:
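A sketch with hypothetical volume group names (two LUNs for data, one for log):
sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2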
4. [AH] Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the
values documented in SAP HANA VM storage configurations. The -i argument should be the number of
underlying physical volumes, and the -I argument is the stripe size. In this document, two physical
volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe size for the data
volume is 256 KiB . One physical volume is used for the log volume, so no -i or -I switches are explicitly
used for the log volume commands. A sketch follows below.
IMPORTANT
Use the -i switch and set it to the number of underlying physical volumes when you use more than one
physical volume for each data or log volume. Use the -I switch to specify the stripe size when creating a striped
volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.
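A minimal sketch, using the volume group names from the previous example:
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log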
5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:
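For example (the directory layout follows the non-shared mode used in this article):
sudo mkdir -p /hana/data/HN1 /hana/log/HN1
# Note the UUIDs of the new logical volumes for the fstab entries
sudo blkid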
6. [AH] Create fstab entries for the logical volumes and mount:
sudo vi /etc/fstab
sudo mount -a
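The entries could look like the following (hypothetical UUIDs; use the values reported by blkid):
/dev/disk/by-uuid/<UUID-of-hana_data> /hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID-of-hana_log> /hana/log/HN1 xfs defaults,nofail 0 2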
Installation
In this example for deploying SAP HANA in scale-out configuration with HSR on Azure VMs, we've used HANA 2.0
SP4.
Prepare for HANA installation
1. [AH] Before the HANA installation, set the root password. You can disable the root password after the
installation has been completed. Execute passwd as root .
2. [1,2] Change the permissions on /hana/shared :
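For example (775 is a common choice here; adapt to your security requirements):
sudo chmod 775 /hana/shared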
3. [1] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s1-db2 and hana-s1-db3 ,
without being prompted for a password.
If that is not the case, exchange ssh keys, as documented in Using Key-based Authentication.
ssh root@hana-s1-db2
ssh root@hana-s1-db3
4. [2] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s2-db2 and hana-s2-db3 ,
without being prompted for a password.
If that is not the case, exchange ssh keys, as documented in Using Key-based Authentication.
ssh root@hana-s2-db2
ssh root@hana-s2-db3
5. [AH] Install additional packages that are required for HANA 2.0 SP4. For more information, see SAP Note
2593824 for RHEL 7.
# If using RHEL 7
yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1
# If using RHEL 8
yum install libatomic libtool-ltdl.x86_64
6. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable
it, after the HANA installation is done.
# Execute as root
systemctl stop firewalld
systemctl disable firewalld
./hdblcm --internal_network=10.23.1.128/26
4. [1,2] Prepare global.ini for installation in non-shared environment, as described in SAP note 2080991.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no
6. [1,2] Verify that the client interface will be using the IP addresses from the client subnet for
communication.
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from
SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.14"
For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP HANA
internal network.
7. [AH] Change permissions on the data and log directories to avoid HANA installation error.
sudo chmod o+w -R /hana/data /hana/log
8. [1] Install the secondary HANA nodes. The example instructions in this step are for SITE 1.
a. Start the resident hdblcm program as root .
cd /hana/shared/HN1/hdblcm
./hdblcm
4. [1,2] Change the HANA configuration so that communication for HANA system replication is directed
through the HANA system replication virtual network interfaces.
Stop HANA on both sites
Edit global.ini to add the host mapping for HANA system replication: use the IP addresses from the hsr
subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.202 = hana-s1-db1
10.23.1.203 = hana-s1-db2
10.23.1.204 = hana-s1-db3
10.23.1.205 = hana-s2-db1
10.23.1.206 = hana-s2-db2
10.23.1.207 = hana-s2-db3
For more information, see Host Name resolution for System Replication.
5. [AH] Re-enable the firewall.
Re-enable the firewall
# Execute as root
systemctl start firewalld
systemctl enable firewalld
Open the necessary firewall ports. You'll need to adjust the ports for your HANA instance number.
IMPORTANT
Create firewall rules to allow HANA inter-node communication and client traffic. The required ports are listed
on TCP/IP Ports of All SAP Products. The following commands are just an example; this scenario uses system
number 03.
# Execute as root
sudo firewall-cmd --zone=public --add-port=30301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30301/tcp
sudo firewall-cmd --zone=public --add-port=30303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30303/tcp
sudo firewall-cmd --zone=public --add-port=30306/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30306/tcp
sudo firewall-cmd --zone=public --add-port=30307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30307/tcp
sudo firewall-cmd --zone=public --add-port=30313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30313/tcp
sudo firewall-cmd --zone=public --add-port=30315/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30315/tcp
sudo firewall-cmd --zone=public --add-port=30317/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30317/tcp
sudo firewall-cmd --zone=public --add-port=30340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30340/tcp
sudo firewall-cmd --zone=public --add-port=30341/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30341/tcp
sudo firewall-cmd --zone=public --add-port=30342/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30342/tcp
sudo firewall-cmd --zone=public --add-port=1128/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1128/tcp
sudo firewall-cmd --zone=public --add-port=1129/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1129/tcp
sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40302/tcp
sudo firewall-cmd --zone=public --add-port=40301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40301/tcp
sudo firewall-cmd --zone=public --add-port=40307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40307/tcp
sudo firewall-cmd --zone=public --add-port=40303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40303/tcp
sudo firewall-cmd --zone=public --add-port=40340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40340/tcp
sudo firewall-cmd --zone=public --add-port=50313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50313/tcp
sudo firewall-cmd --zone=public --add-port=50314/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50314/tcp
sudo firewall-cmd --zone=public --add-port=30310/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30310/tcp
sudo firewall-cmd --zone=public --add-port=30302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30302/tcp
IMPORTANT
Don't set quorum expected-votes to 2, as this is not a two-node cluster.
Make sure that the cluster property concurrent-fencing is enabled, so that node fencing is deserialized.
umount /hana/shared
3. [1] Create the file system cluster resources for /hana/shared in the disabled state. The resources are created
with the option --disabled because you have to define the location constraints before the mounts are
enabled.
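A sketch of the resource creation (using the ANF volume paths mounted earlier; the mount options and timeouts are examples):
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem \
  device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared fstype=nfs \
  options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,vers=4.1,sec=sys' \
  op monitor interval=20s timeout=120s on-fail=fence OCF_CHECK_LEVEL=20
# Repeat accordingly for fs_hana_shared_s2 with device=10.23.1.7:/HN1-shared-s2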
# clone the /hana/shared file system resources for both site1 and site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation so that monitor operations perform a
read/write test on the file system. Without this attribute, the monitor operation only verifies that the file
system is mounted. This can be a problem because, when connectivity is lost, the file system may remain
mounted despite being inaccessible.
The on-fail=fence attribute is also added to the monitor operation. With this option, if the monitor operation
fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all
resources that depend on the failed resource, then restart the failed resource, and then start all the
dependent resources again. Not only can this behavior take a long time when an SAPHana resource depends
on the failed resource, it can also fail altogether, because the SAPHana resource cannot stop successfully if
the NFS share holding the HANA binaries is inaccessible.
4. [1] Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned
attribute S1 , and all SAP HANA DB nodes on replication site 2 are assigned attribute S2 .
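For example (the attribute name NFS_SID_SITE matches the constraints in the next step):
pcs node attribute hana-s1-db1 NFS_SID_SITE=S1
pcs node attribute hana-s1-db2 NFS_SID_SITE=S1
pcs node attribute hana-s1-db3 NFS_SID_SITE=S1
pcs node attribute hana-s2-db1 NFS_SID_SITE=S2
pcs node attribute hana-s2-db2 NFS_SID_SITE=S2
pcs node attribute hana-s2-db3 NFS_SID_SITE=S2
# Verify the assigned attributes
pcs node attribute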
5. [1] Configure the constraints that determine where the NFS file systems will be mounted, and enable the
file system resources.
# Configure the constraints
pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never score=-INFINITY
NFS_SID_SITE ne S1
pcs constraint location fs_hana_shared_s2-clone rule resource-discovery=never score=-INFINITY
NFS_SID_SITE ne S2
# Enable the file system resources
pcs resource enable fs_hana_shared_s1
pcs resource enable fs_hana_shared_s2
When you enable the file system resources, the cluster will mount the /hana/shared file systems.
6. [AH] Verify that the ANF volumes are mounted under /hana/shared on all HANA DB VMs on both sites.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
7. [1] Configure the attribute resources. Configure the constraints that will set the attributes to true if the
NFS mounts for /hana/shared are mounted.
TIP
If your configuration includes other file systems besides /hana/shared that are NFS mounted, then include the
sequential=false option, so that there are no ordering dependencies among the file systems. All NFS-mounted file
systems must start before the corresponding attribute resource, but they don't need to start in any order relative
to each other. For more information, see How do I configure SAP HANA Scale-Out HSR in a pacemaker cluster when
the HANA file systems are NFS shares.
8. [1] Place Pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources:
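For example:
pcs property set maintenance-mode=true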
NOTE
Consult Support Policies for RHEL HA clusters - Management of SAP HANA in a cluster for the minimum supported
version of package resource-agents-sap-hana-scaleout for your OS release.
2. [1,2] Install the HANA "system replication hook". The hook needs to be installed on one HANA DB node on
each system replication site. SAP HANA should still be down.
a. Prepare the hook as root
mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks
b. Adjust global.ini
# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
3. [AH] The cluster requires sudoers configuration on the cluster nodes for <sid>adm. In this example, that's
achieved by creating a new file. Execute the commands as root .
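A sketch of such a sudoers file (the attribute name hana_hn1_glob_srHook follows the SID hn1 used in this article; verify it against your hook configuration before use):
cat << EOF > /etc/sudoers.d/20-saphana
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
EOF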
5. [1] Verify the hook installation. Execute as <sid>adm on the active HANA system replication site.
cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example entries
# 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
6. [1] Create the HANA cluster resources. Execute the following commands as root .
a. Make sure the cluster is already in maintenance mode.
b. Next, create the HANA Topology resource.
If building RHEL 7.x cluster, use the following commands:
NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is
removed from the software, we’ll remove it from this article.
7. [1] Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the
resources are started.
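For example:
pcs property set maintenance-mode=false
# Verify the cluster and resource status
pcs status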
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup.
For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
#mode: PRIMARY
#site id: 1
#site name: HANA_S1
2. Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (
/hana/shared ).
The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during
failover. File system /hana/shared is mounted over NFS in the presented configuration. A test that can be
performed is to remount the /hana/shared file system as read-only. This approach validates that the cluster
will fail over if access to /hana/shared is lost on the active system replication site.
Expected result : When you remount /hana/shared as read-only, the monitoring operation that performs a
read/write operation on the file system will fail, because it's not able to write to the file system, and will
trigger a HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
You can check the state of the cluster resources by executing crm_mon or pcs status . Resource state before
starting the test:
# Output of crm_mon
#7 nodes configured
#45 resources configured
To simulate failure for /hana/shared on one of the primary replication site VMs, execute the following
command:
# Execute as root
mount -o ro /hana/shared
# Or if the above command returns an error
sudo mount -o ro 10.23.1.7:/HN1-shared-s1 /hana/shared
The HANA VM that lost access to /hana/shared should restart or stop, depending on the cluster
configuration. The cluster resources are migrated to the other HANA system replication site.
If the cluster hasn't started on the VM that was restarted, start the cluster by executing:
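For example:
# Execute as root on the restarted VM
pcs cluster start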
When the cluster starts, file system /hana/shared will be automatically mounted.
If you set AUTOMATED_REGISTER="false", you'll need to configure SAP HANA system replication on the
secondary site. In this case, you can execute these commands to reconfigure SAP HANA as secondary.
# Execute on the secondary
su - hn1adm
# Make sure HANA is not running on the secondary site. If it is started, stop HANA
sapcontrol -nr 03 -function StopWait 600 10
# Register the HANA secondary site
hdbnsutil -sr_register --name=HANA_S1 --remoteHost=hana-s2-db1 --remoteInstance=03 --
replicationMode=sync
# Switch back to root and cleanup failed resources
pcs resource cleanup SAPHana_HN1_HDB03
# Output of crm_mon
#7 nodes configured
#45 resources configured
#Active resources:
We recommend that you thoroughly test the SAP HANA cluster configuration by also performing the tests
documented in HA for SAP HANA on Azure VMs on RHEL.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Deploy a SAP HANA scale-out system with standby
node on Azure VMs by using Azure NetApp Files on
SUSE Linux Enterprise Server
12/22/2020 • 33 minutes to read
This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with
standby on Azure virtual machines (VMs) by using Azure NetApp Files for the shared storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server for SAP 12 SP4.
Before you begin, refer to the following SAP notes and papers:
Azure NetApp Files documentation
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2205917: Contains recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799: Contains SAP Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1984787: Contains general information about SUSE Linux Enterprise Server 12
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides: Contains all required information to set up NetWeaver High Availability
and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more
detailed information)
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
One method for achieving HANA high availability is by configuring host auto failover. To configure host auto failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When an active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual machines, you achieve auto failover by using NFS on Azure NetApp Files.
NOTE
The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The
improved file lease-based locking mechanism in the NFSv4 protocol is used for I/O fencing.
IMPORTANT
To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount
them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented
within one Azure virtual network:
For client communication
For communication with the storage system
For internal HANA inter-node communication
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
For this example configuration, the subnets are:
client 10.23.0.0/24
storage 10.23.2.0/24
hana 10.23.3.0/24
anf 10.23.1.0/26
IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual
machines and the Azure NetApp Files volumes are deployed in close proximity.
| Volume | Size with Premium storage tier | Size with Ultra storage tier | Supported NFS protocol |
| ------ | ------------------------------ | ---------------------------- | ---------------------- |
| /hana/shared | Max (512 GB, 1xRAM) per 4 worker nodes | Max (512 GB, 1xRAM) per 4 worker nodes | v3 or v4.1 |
The SAP HANA configuration for the layout that's presented in this article, using Azure NetApp Files Ultra Storage
tier, would be:
| Volume | Size with Ultra storage tier | Supported NFS protocol |
NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not
be sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific
workload.
TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual
machines, or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput
demands of your application.
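For example, with the Azure CLI you can change the volume quota online; the resource names below are placeholders, and --usage-threshold is specified in GiB:
# Resize an Azure NetApp Files volume online (all names are placeholders)
az netappfiles volume update --resource-group my-rg --account-name my-anf-account --pool-name my-pool --name HN1-data-mnt00001 --usage-threshold 1200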
The next instructions assume that you've already created the resource group, the Azure virtual network, and the
three Azure virtual network subnets: client , storage and hana . When you deploy the VMs, select the client
subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an
explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway.
IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP
HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
1. Create an availability set for SAP HANA. Make sure to set the max update domain.
2. Create three virtual machines (hanadb1 , hanadb2 , hanadb3 ) by doing the following steps:
a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA. We used a SLES4SAP 12
SP4 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface name is automatically generated. In these instructions, for simplicity, we'll refer to the automatically generated network interfaces, which are attached to the client Azure virtual network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.
3. Create three network interfaces, one for each virtual machine, for the storage virtual network subnet (in
this example, hanadb1-storage , hanadb2-storage , and hanadb3-storage ).
4. Create three network interfaces, one for each virtual machine, for the hana virtual network subnet (in this
example, hanadb1-hana , hanadb2-hana , and hanadb3-hana ).
5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the
following steps:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Vir tual Machines . Filter on the virtual machine name (for example, hanadb1 ),
and then select the virtual machine.
c. In the Over view pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-
down list, select the already created network interfaces for the storage and hana subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hanadb2 and hanadb3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the storage and hana subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the storage and hana subnets.
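A sketch of the Azure CLI commands that enable accelerated networking on the extra NICs; the resource group name is a placeholder:
# Execute in Azure Cloud Shell; replace my-rg with your resource group name
az network nic update --resource-group my-rg --name hanadb1-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb2-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb3-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb1-hana --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb2-hana --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb3-hana --accelerated-networking true
Start the virtual machines after accelerated networking has been enabled. The entries that follow are the example IP addresses as they would appear in /etc/hosts on all nodes ([A]):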
# Storage
10.23.2.4 hanadb1-storage
10.23.2.5 hanadb2-storage
10.23.2.6 hanadb3-storage
# Client
10.23.0.5 hanadb1
10.23.0.6 hanadb2
10.23.0.7 hanadb3
# Hana
10.23.3.4 hanadb1-hana
10.23.3.5 hanadb2-hana
10.23.3.6 hanadb3-hana
2. [A] Change DHCP and cloud config settings for the network interface for storage to avoid unintended
hostname changes.
The following instructions assume that the storage network interface is eth1 .
vi /etc/sysconfig/network/dhcp
# Change the following DHCP setting to "no"
DHCLIENT_SET_HOSTNAME="no"
vi /etc/sysconfig/network/ifcfg-eth1
# Edit ifcfg-eth1
#Change CLOUD_NETCONFIG_MANAGE='yes' to "no"
CLOUD_NETCONFIG_MANAGE='no'
3. [A] Add a network route, so that the communication to Azure NetApp Files goes via the storage network interface.
The following instructions assume that the storage network interface is eth1 .
vi /etc/sysconfig/network/ifroute-eth1
# Add the following routes
# RouterIPforStorageNetwork - - -
# ANFNetwork/cidr RouterIPforStorageNetwork - -
10.23.2.1 - - -
10.23.1.0/26 10.23.2.1 - -
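4. [A] Prepare the OS for running SAP HANA on Azure NetApp Files with NFS, as described in NetApp SAP Applications on Microsoft Azure using Azure NetApp Files. Create configuration file /etc/sysctl.d/netapp-hana.conf for the NetApp configuration settings.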
vi /etc/sysctl.d/netapp-hana.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
5. [A] Create configuration file /etc/sysctl.d/ms-az.conf with additional configuration settings for Microsoft Azure.
vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.ip_local_port_range = 40000 65300
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
6. [A] Adjust the sunrpc settings, as recommended in the NetApp SAP Applications on Microsoft Azure using
Azure NetApp Files.
vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128
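2. [A] Create mount points for the HANA database volumes.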
mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1
3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .
4. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.23.1.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
5. [A] Create the SAP HANA group and user manually. The IDs for group sapsys and user hn1adm must be set to the same IDs that were provided during the onboarding. (In this example, the IDs are set to 1001.) If the IDs aren't set correctly, you won't be able to access the volumes. The IDs for group sapsys and user accounts hn1adm and sapadm must be the same on all virtual machines.
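7. [1] Mount the node-specific volume on hanadb1.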
sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
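8. [2] Mount the node-specific volume on hanadb2.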
sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
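9. [3] Mount the node-specific volume on hanadb3.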
sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
10. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.23.1.5:/HN1-data-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.5
/hana/log/HN1/mnt00002 from 10.23.1.6:/HN1-log-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6
/hana/data/HN1/mnt00002 from 10.23.1.6:/HN1-data-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6
/hana/log/HN1/mnt00001 from 10.23.1.4:/HN1-log-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4
/usr/sap/HN1 from 10.23.1.4:/HN1-shared/usr-sap-hanadb1
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4
/hana/shared from 10.23.1.4:/HN1-shared/shared
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4
Installation
In this example for deploying SAP HANA in a scale-out configuration with a standby node on Azure, we've used HANA 2.0 SP4.
Prepare for HANA installation
1. [A] Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Execute the command passwd as root.
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3 , without being prompted for a password.
ssh root@hanadb2
ssh root@hanadb3
3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note
2593824.
4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1
HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In
this example, we install SAP HANA scale-out with master, one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use the internal_network parameter and pass the address space for the subnet that's used for the internal HANA inter-node communication.
./hdblcm --internal_network=10.23.3.0/24
3. [1] Add host mapping to ensure that the client IP addresses are used for client communication. Add section public_hostname_resolution, and add the corresponding IP addresses from the client subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.23.0.5
map_hanadb2 = 10.23.0.6
map_hanadb3 = 10.23.0.7
5. [1] Verify that the client interface will be using the IP addresses from the client subnet for
communication.
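The same check that's shown later for the RHEL variant of this scenario applies here, using this example's client IP addresses:
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.23.0.7"
"hanadb2","net_publicname","10.23.0.6"
"hanadb1","net_publicname","10.23.0.5"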
For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP
HANA internal network.
6. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA
parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see NetApp SAP Applications on Microsoft Azure using Azure NetApp Files.
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini . For more information, see
SAP Note 1999930.
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation,
as described in SAP Note 2267798.
7. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file size limit of 16 TB is reached. If SAP HANA attempts to grow the file beyond 16 TB, that attempt will result in errors and, eventually, in an index server crash.
IMPORTANT
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of the storage subsystem, set the
following parameters in global.ini .
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005. Be aware of SAP Note 2631285.
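As a sketch, both parameters go into global.ini; the persistence section shown here is the usual location for the data volume settings:
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000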
b. To simulate a node crash, run the following command as root on the worker node, which is hanadb2 in
this case:
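Assuming the kernel's SysRq interface is enabled, a kernel panic can be triggered like this (a sketch):
# Execute as root on hanadb2 - triggers an immediate kernel crash
echo c > /proc/sysrq-trigger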
c. Monitor the system for failover completion. When the failover has been completed, capture the status,
which should look like the following:
#Landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
b. Run the following commands as hn1adm on the active master node, which is hanadb1 in this case:
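As in step d later in this test sequence, the name server is killed with HDB kill:
hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill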
The standby node hanadb3 will take over as master node. Here is the resource state after the failover test
is completed:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |
c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine where the name server was killed). The hanadb1 node will rejoin the environment and will keep its standby role.
After SAP HANA has started on hanadb1 , expect the following status:
d. Again, kill the name server on the currently active master node (that is, on node hanadb3 ).
hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill
Node hanadb1 will resume the role of master node. After the failover test has been completed, the status
will look like this:
e. Start SAP HANA on hanadb3 , which will be ready to serve as a standby node.
After SAP HANA has started on hanadb3 , the status looks like the following:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Deploy a SAP HANA scale-out system with standby
node on Azure VMs by using Azure NetApp Files on
Red Hat Enterprise Linux
12/22/2020 • 34 minutes to read
This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with
standby on Azure Red Hat Enterprise Linux virtual machines (VMs), by using Azure NetApp Files for the shared
storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.
Before you begin, refer to the following SAP notes and papers:
Azure NetApp Files documentation
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Enterprise Linux Networking Guide
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
One method for achieving HANA high availability is by configuring host auto failover. To configure host auto failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When an active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual machines, you achieve auto failover by using NFS on Azure NetApp Files.
NOTE
The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The
improved file lease-based locking mechanism in the NFSv4 protocol is used for I/O fencing.
IMPORTANT
To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount
them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented
within one Azure virtual network:
For client communication
For communication with the storage system
For internal HANA inter-node communication
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
For this example configuration, the subnets are:
client 10.9.1.0/26
storage 10.9.3.0/26
hana 10.9.2.0/26
anf 10.9.0.0/26 (delegated subnet to Azure NetApp Files)
IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual
machines and the Azure NetApp Files volumes are deployed in close proximity.
| Volume | Size with Premium storage tier | Size with Ultra storage tier | Supported NFS protocol |
| ------ | ------------------------------ | ---------------------------- | ---------------------- |
| /hana/shared | 1xRAM per 4 worker nodes | 1xRAM per 4 worker nodes | v3 or v4.1 |
The SAP HANA configuration for the layout that's presented in this article, using Azure NetApp Files Ultra Storage
tier, would be:
| Volume | Size with Ultra storage tier | Supported NFS protocol |
NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not
be sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific
workload.
TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual
machines, or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput
demands of your application.
The next instructions assume that you've already created the resource group, the Azure virtual network, and the
three Azure virtual network subnets: client , storage and hana . When you deploy the VMs, select the client
subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an
explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway.
IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP
HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
1. Create an availability set for SAP HANA. Make sure to set the max update domain.
2. Create three virtual machines (hanadb1 , hanadb2 , hanadb3 ) by doing the following steps:
a. Use a Red Hat Enterprise Linux image in the Azure gallery that's supported for SAP HANA. We used a
RHEL-SAP-HA 7.6 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface name is automatically generated. In these instructions, for simplicity, we'll refer to the automatically generated network interfaces, which are attached to the client Azure virtual network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.
3. Create three network interfaces, one for each virtual machine, for the storage virtual network subnet (in
this example, hanadb1-storage , hanadb2-storage , and hanadb3-storage ).
4. Create three network interfaces, one for each virtual machine, for the hana virtual network subnet (in this
example, hanadb1-hana , hanadb2-hana , and hanadb3-hana ).
5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the
following steps:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Vir tual Machines . Filter on the virtual machine name (for example, hanadb1 ),
and then select the virtual machine.
c. In the Over view pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-
down list, select the already created network interfaces for the storage and hana subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hanadb2 and hanadb3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the storage and hana subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the storage and hana subnets.
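A sketch of the Azure CLI commands that enable accelerated networking on the extra NICs; the resource group name is a placeholder:
# Execute in Azure Cloud Shell; replace my-rg with your resource group name
az network nic update --resource-group my-rg --name hanadb1-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb2-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb3-storage --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb1-hana --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb2-hana --accelerated-networking true
az network nic update --resource-group my-rg --name hanadb3-hana --accelerated-networking true
Start the virtual machines after accelerated networking has been enabled. The entries that follow are the example IP addresses as they would appear in /etc/hosts on all nodes ([A]):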
# Storage
10.9.3.4 hanadb1-storage
10.9.3.5 hanadb2-storage
10.9.3.6 hanadb3-storage
# Client
10.9.1.5 hanadb1
10.9.1.6 hanadb2
10.9.1.7 hanadb3
# Hana
10.9.2.4 hanadb1-hana
10.9.2.5 hanadb2-hana
10.9.2.6 hanadb3-hana
2. [A] Add a network route, so that the communication to Azure NetApp Files goes via the storage network interface.
In this example, we'll use NetworkManager to configure the additional network route. The following instructions assume that the storage network interface is eth1.
First, determine the connection name for device eth1. In this example, the connection name for device eth1 is Wired connection 1.
# Execute as root
nmcli connection
# Result
#NAME UUID TYPE DEVICE
#System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet eth0
#Wired connection 1 4b0789d1-6146-32eb-83a1-94d61f8d60a7 ethernet eth1
Then configure an additional route to the Azure NetApp Files delegated network via eth1.
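A sketch of the NetworkManager commands, assuming the storage subnet gateway is 10.9.3.1:
# Execute as root - add a static route to the ANF delegated subnet via eth1
nmcli connection modify "Wired connection 1" +ipv4.routes "10.9.0.0/26 10.9.3.1"
# Re-activate the connection to apply the route
nmcli connection up "Wired connection 1"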
4. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in NetApp SAP
Applications on Microsoft Azure using Azure NetApp Files. Create configuration file /etc/sysctl.d/netapp-
hana.conf for the NetApp configuration settings.
vi /etc/sysctl.d/netapp-hana.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
6. [A] Adjust the sunrpc settings, as recommended in the NetApp SAP Applications on Microsoft Azure using
Azure NetApp Files.
vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128
NOTE
If you're installing HANA 2.0 SP04, you must install the compat-sap-c++-7 package, as described in SAP Note 2593824, before you can install SAP HANA.
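2. [A] Create mount points for the HANA database volumes.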
mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1
3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .
4. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
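6. [1] Mount the node-specific volume on hanadb1.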
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
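7. [2] Mount the node-specific volume on hanadb2.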
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
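8. [3] Mount the node-specific volume on hanadb3.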
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
9. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.9.0.4:/HN1-data-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00002 from 10.9.0.4:/HN1-log-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/data/HN1/mnt00002 from 10.9.0.4:/HN1-data-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00001 from 10.9.0.4:/HN1-log-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/usr/sap/HN1 from 10.9.0.4:/HN1-shared/usr-sap-hanadb1
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/shared from 10.9.0.4:/HN1-shared/shared
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
Installation
In this example for deploying SAP HANA in a scale-out configuration with a standby node on Azure, we've used HANA 2.0 SP4.
Prepare for HANA installation
1. [A] Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Execute the command passwd as root.
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3 , without being prompted for a password.
ssh root@hanadb2
ssh root@hanadb3
3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note
2593824.
4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1
5. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable it after the HANA installation is done.
# Execute as root
systemctl stop firewalld
systemctl disable firewalld
HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In
this example, we install SAP HANA scale-out with master, one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use the internal_network parameter and pass the address space for the subnet that's used for the internal HANA inter-node communication.
./hdblcm --internal_network=10.9.2.0/26
3. [1] Add host mapping to ensure that the client IP addresses are used for client communication. Add section public_hostname_resolution, and add the corresponding IP addresses from the client subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.9.1.5
map_hanadb2 = 10.9.1.6
map_hanadb3 = 10.9.1.7
5. [1] Verify that the client interface will be using the IP addresses from the client subnet for
communication.
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.9.1.7"
"hanadb2","net_publicname","10.9.1.6"
"hanadb1","net_publicname","10.9.1.5"
For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP
HANA internal network.
6. [A] Re-enable the firewall.
Stop HANA
# Execute as root
systemctl start firewalld
systemctl enable firewalld
IMPORTANT
Create firewall rules to allow HANA inter-node communication and client traffic. The required ports are listed on TCP/IP Ports of All SAP Products. The following commands are just an example. In this scenario, system number 03 is used.
# Execute as root
sudo firewall-cmd --zone=public --add-port=30301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30301/tcp
sudo firewall-cmd --zone=public --add-port=30303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30303/tcp
sudo firewall-cmd --zone=public --add-port=30306/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30306/tcp
sudo firewall-cmd --zone=public --add-port=30307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30307/tcp
sudo firewall-cmd --zone=public --add-port=30313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30313/tcp
sudo firewall-cmd --zone=public --add-port=30315/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30315/tcp
sudo firewall-cmd --zone=public --add-port=30317/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30317/tcp
sudo firewall-cmd --zone=public --add-port=30340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30340/tcp
sudo firewall-cmd --zone=public --add-port=30341/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30341/tcp
sudo firewall-cmd --zone=public --add-port=30342/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30342/tcp
sudo firewall-cmd --zone=public --add-port=1128/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1128/tcp
sudo firewall-cmd --zone=public --add-port=1129/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1129/tcp
sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40302/tcp
sudo firewall-cmd --zone=public --add-port=40301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40301/tcp
sudo firewall-cmd --zone=public --add-port=40307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40307/tcp
sudo firewall-cmd --zone=public --add-port=40303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40303/tcp
sudo firewall-cmd --zone=public --add-port=40340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40340/tcp
sudo firewall-cmd --zone=public --add-port=50313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50313/tcp
sudo firewall-cmd --zone=public --add-port=50314/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50314/tcp
sudo firewall-cmd --zone=public --add-port=30310/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30310/tcp
sudo firewall-cmd --zone=public --add-port=30302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30302/tcp
Start HANA
7. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA
parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see NetApp SAP Applications on Microsoft Azure using Azure NetApp Files.
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini . For more information, see
SAP Note 1999930.
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation,
as described in SAP Note 2267798.
8. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file size limit of 16 TB is reached. If SAP HANA attempts to grow the file beyond 16 TB, that attempt will result in errors and, eventually, in an index server crash.
IMPORTANT
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of the storage subsystem, set the
following parameters in global.ini .
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005. Be aware of SAP Note 2631285.
b. To simulate a node crash, run the following command as root on the worker node, which is hanadb2 in
this case:
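Assuming the kernel's SysRq interface is enabled, a kernel panic can be triggered like this (a sketch):
# Execute as root on hanadb2 - triggers an immediate kernel crash
echo c > /proc/sysrq-trigger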
IMPORTANT
When a node experiences kernel panic, avoid delays with SAP HANA failover by setting kernel.panic to 20 seconds on all HANA virtual machines. The configuration is done in /etc/sysctl.conf. Reboot the virtual machines to activate the change. If this change isn't made, failover can take 10 minutes or more when a node experiences kernel panic.
b. Run the following commands as hn1adm on the active master node, which is hanadb1 in this case:
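As in the SLES variant of this test, the name server is killed with HDB kill:
hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill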
The standby node hanadb3 will take over as master node. Here is the resource state after the failover test
is completed:
c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine where the name server was killed). The hanadb1 node will rejoin the environment and will keep its standby role.
After SAP HANA has started on hanadb1 , expect the following status:
d. Again, kill the name server on the currently active master node (that is, on node hanadb3 ).
Node hanadb1 will resume the role of master node. After the failover test has been completed, the status
will look like this:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
e. Start SAP HANA on hanadb3 , which will be ready to serve as a standby node.
After SAP HANA has started on hanadb3 , the status looks like the following:
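Backup guide for SAP HANA on Azure Virtual Machines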
Getting Started
The backup guide for SAP HANA running on Azure Virtual Machines only describes Azure-specific topics. For general SAP HANA backup-related items, check the SAP HANA documentation. We expect you to be familiar with the principles of database backups, the reasons and motivations for having a sound and valid backup strategy, and the requirements your company has for the backup procedure, retention period of backups, and restore procedure.
SAP HANA is officially supported on various Azure VM types, like the Azure M-Series. For a complete list of SAP HANA certified Azure VMs and HANA Large Instance units, check out Find Certified IaaS Platforms. Microsoft Azure offers a number of units where SAP HANA runs non-virtualized on physical servers. This service is called HANA Large Instances. This guide doesn't cover backup processes and tools for HANA Large Instances; it's limited to Azure virtual machines. For details about backup/restore processes with HANA Large Instances, read the article HLI Backup and Restore.
The focus of this article is on three backup possibilities for SAP HANA on Azure virtual machines:
HANA backup through Azure Backup Services
HANA backup to the file system in an Azure Linux Virtual Machine (see SAP HANA Azure Backup on file level)
HANA backup based on storage snapshots using the Azure storage blob snapshot feature manually or Azure
Backup service
SAP HANA offers a backup API, which allows third-party backup tools to integrate directly with SAP HANA. Products like the Azure Backup service or Commvault use this proprietary interface to trigger SAP HANA database or redo log backups.
Information on how you can find what SAP software is supported on Azure can be found in the article What SAP
software is supported for Azure deployments.
NOTE
Disk snapshot based backups for SAP HANA in deployments where multiple database containers are used, require a
minimum release of HANA 2.0 SP04
This figure shows options for taking an SAP HANA file backup inside the VM, and then storing the HANA backup files somewhere else using different tools. However, all solutions that don't involve a third-party backup service or the Azure Backup service have several hurdles in common, such as retention administration, an automatic restore process, and automatic point-in-time recovery, which the Azure Backup service or other specialized third-party backup suites and services provide. Many of those third-party services can run on Azure.
Azure storage does not provide file system consistency across multiple disks or volumes that are attached to a VM during the snapshot process. That means the application consistency during the snapshot needs to be delivered by the application, in this case SAP HANA itself. SAP Note 2039883 has important information about SAP HANA backups by storage snapshots. For example, with XFS file systems, it is necessary to run xfs_freeze before starting a storage snapshot to provide application consistency (see xfs_freeze(8) - Linux man page for details on xfs_freeze).
Assuming there is an XFS file system spanning four Azure virtual disks, the following steps provide a consistent snapshot that represents the HANA data area (a sketch follows the list):
1. Create HANA data snapshot prepare
2. Freeze the file systems of all disks/volumes (for example, use xfs_freeze )
3. Create all necessary blob snapshots on Azure
4. Unfreeze the file system
5. Confirm the HANA data snapshot (will delete the snapshot)
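A minimal Bash sketch of this sequence, assuming instance number 03, the SYSTEM user of SYSTEMDB, XFS mounted at /hana/data, and a multi-container system; the backup ID lookup and the blob snapshot call are placeholders:
#!/bin/bash
# Sketch only - consistent storage snapshot of the HANA data area
# 1. Prepare the HANA data snapshot
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p "<password>" "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'blob-snapshot'"
# 2. Freeze the file systems so that no writes reach the disks
xfs_freeze -f /hana/data
# 3. Create all necessary blob snapshots on Azure (placeholder - for example,
#    via the Azure CLI or the storage REST API)
# 4. Unfreeze the file systems
xfs_freeze -u /hana/data
# 5. Confirm the HANA data snapshot; look up the backup ID of the prepared
#    snapshot in M_BACKUP_CATALOG and substitute it for <backup-id>
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p "<password>" "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup-id> SUCCESSFUL 'blob-snapshot'"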
When you use Azure Backup's capability to perform application-consistent snapshot backups, step #1 needs to be coded/scripted by you in the pre-snapshot script. The Azure Backup service will execute steps #2 and #3. Steps #4 and #5 need to be provided again by your code in the post-snapshot script. If you're not using the Azure Backup service, you also need to code/script steps #2 and #3 on your own. More information on creating HANA data snapshots can be found in these articles:
HANA data snapshots (https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/ac114d4b34d542b99bc390b34f8ef375.html)
More details to perform step #1 can be found in the article Create a Data Snapshot (Native SQL)
Details to confirm/delete HANA data snapshots as needed in step #5 can be found in the article Create a Data Snapshot (Native SQL)
It is important to confirm the HANA snapshot. Due to the "Copy-on-Write," SAP HANA might not require
additional disk space while in this snapshot-prepare mode. It's also not possible to start new backups until the
SAP HANA snapshot is confirmed.
SAP HANA backup scheduling strategy
The SAP HANA article Planning Your Backup and Recovery Strategy states a basic plan to do backups. Rely on
SAP documentation around HANA and your experiences with other DBMS in defining the backup/restore
strategy and process for SAP HANA. The sequence of different types of backups, and the retention period are
highly dependent on the SLAs you need to provide.
SAP HANA backup encryption
SAP HANA offers encryption of data and log. If SAP HANA data and log are not encrypted, then the backups are
not encrypted by default. However, SAP HANA offers a separate backup encryption as documented in SAP HANA
Backup Encryption. If you are running older releases of SAP HANA, you might need to check whether backup
encryption was part of the functionality provided already.
Next steps
SAP HANA Azure Backup on file level describes the file-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA Azure Backup on file level
12/22/2020 • 8 minutes to read
Introduction
This article is a related article to Backup guide for SAP HANA on Azure Virtual Machines, which provides an
overview and information on getting started and more details on Azure Backup service and storage snapshots.
Different VM types in Azure allow a different number of attached VHDs. The exact details are documented in Sizes for Linux virtual machines in Azure. For the tests referred to in this documentation, we used a GS5 Azure VM, which allows 64 attached data disks. For larger SAP HANA systems, a significant number of disks might already be taken for data and log files, possibly in combination with software striping for optimal disk IO throughput. For more details on suggested disk configurations for SAP HANA deployments on Azure VMs, read the article SAP HANA Azure virtual machine storage configurations. The recommendations made there include disk space recommendations for local backups as well.
The standard way to manage backup/restore at the file level is with a file-based backup via SAP HANA Studio or
via SAP HANA SQL statements. For more information, read the article SAP HANA SQL and System Views
Reference.
This figure shows the dialog of the backup menu item in SAP HANA Studio. When choosing type "file," one has to
specify a path in the file system where SAP HANA writes the backup files. Restore works the same way.
While this choice sounds simple and straightforward, there are some considerations. An Azure VM has a limit on the number of data disks that can be attached. There might not be capacity to store SAP HANA backup files on the file systems of the VM, depending on the size of the database and disk throughput requirements, which might involve software striping across multiple data disks. Various options for moving these backup files, and managing file size restrictions and performance when handling terabytes of data, are provided later in this article.
Another option, which offers more freedom regarding total capacity, is Azure blob storage. While a single blob is
also restricted to 1 TB, the total capacity of a single blob container is currently 500 TB. Additionally, it gives
customers the choice to select so-called "cool" blob storage, which has a cost benefit. See Azure Blob storage: hot,
cool, and archive access tiers for details about cool blob storage.
For additional safety, use a geo-replicated storage account to store the SAP HANA backups. See Azure Storage
redundancy for details about storage redundancy and storage replication.
One could place dedicated VHDs for SAP HANA backups in a dedicated backup storage account that is geo-
replicated. Or else one could copy the VHDs that keep the SAP HANA backups to a geo-replicated storage account,
or to a storage account that is in a different region.
This screenshot shows the SAP HANA backup console of SAP HANA Studio. It took about 42 minutes to perform a
backup of 230 GB on a single Azure Standard HDD storage disk attached to the HANA VM using the XFS file
system on the one disk.
This screenshot is of YaST on the SAP HANA test VM. You can see the 1-TB single disk for SAP HANA backup. It took about 42 minutes to back up 230 GB. In addition, five 200-GB disks were attached, and software RAID md0 was created with striping on top of these five Azure data disks.
Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks
brought the backup time from 42 minutes down to 10 minutes. The disks were attached without caching to the
VM. This exercise demonstrates the importance of disk write throughput for achieving good backup time. You
could switch to Azure Standard SSD storage or Azure Premium Storage to further accelerate the process for optimal performance. In general, Azure Standard HDD storage is not recommended and was used for demonstration purposes only. The recommendation is to use a minimum of Azure Standard SSD storage or Azure Premium Storage for production systems.
You can see the files of a full SAP HANA file backup. Of the four files, the biggest one is roughly 230 GB in size. In the initial test, without using an md5 hash, it took roughly 3,000 seconds for the blobxfer tool to copy the 230 GB to an Azure standard storage account blob container.
The HANA Studio backup console allows you to restrict the maximum file size of HANA backup files. In the sample environment, that improved performance by producing multiple smaller backup files instead of one large 230-GB file. Setting the backup file size limit on the HANA side doesn't improve the backup time, because the files are written sequentially. The file size limit was set to 60 GB, so the backup created four large data files instead of the single 230-GB file. Using multiple backup files can become a necessity for backing up HANA databases if your backup targets have limitations on file sizes or blob sizes.
To test the parallelism of the blobxfer tool, the maximum file size for HANA backups was then set to 15 GB, which resulted in 19 backup files. This configuration brought the time for blobxfer to copy the 230 GB to Azure blob storage down from 3,000 seconds to 875 seconds.
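As an illustration, such a parallel upload can look like the sketch below. The account name, key, container, and local path are placeholders, and the exact command-line options differ between blobxfer versions, so verify them against the blobxfer documentation:

    # Upload all HANA backup files of the day in parallel to a blob container.
    blobxfer upload \
      --storage-account <account> \
      --storage-account-key <key> \
      --remote-path hana-backups/$(date +%F) \
      --local-path /hana/backup/data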
As you explore copying backups performed against local disks to other locations, like Azure blob storage, keep in mind that the bandwidth used by a parallel copy process counts against the network or storage quota of your individual VM type. As a result, you need to balance the duration of the copy against the network and storage bandwidth that the normal workload running in the VM requires.
NOTE
SAP HANA supports writing backups to NFS v3 and NFS v4.x shares. Other protocols, like SMB with the CIFS file system, are not supported as targets for writing HANA backups. See also SAP support note #1820529. As a result, an SMB-based share can only be used as the final destination of a HANA database backup that has been conducted directly against locally attached disks.
In a test conducted against Azure Files (not Azure Premium Files), it took around 929 seconds to copy 19 backup files with an overall volume of 230 GB. We expect the time using Azure Premium Files to be considerably shorter. However, keep in mind that you need to balance the interest in a fast copy against the requirements your workload has on network bandwidth. Since every Azure VM type enforces a network bandwidth quota, your workload plus the copy of the backup files needs to stay within the range of that quota.
Storing SAP HANA backup files on Azure Files could be an interesting option, especially with the improved latency and throughput of Azure Premium Files.
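If you evaluate this option, an NFS 4.1 share on Azure Premium Files can be mounted roughly as sketched below and used as the copy target. The storage account and share names are placeholders; NFS shares require a premium (FileStorage) account, so verify the prerequisites in the Azure Files documentation:

    # Mount an NFS 4.1 share from Azure Premium Files.
    sudo mkdir -p /mnt/hana-backup-copy
    sudo mount -t nfs <account>.file.core.windows.net:/<account>/<share> \
      /mnt/hana-backup-copy -o vers=4,minorversion=1,sec=sys

    # Copy the finished backup files from the local disks to the share.
    cp /hana/backup/data/* /mnt/hana-backup-copy/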
Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP workloads on Azure: planning and deployment
checklist
12/22/2020 • 28 minutes to read
This checklist is designed for customers moving SAP NetWeaver, S/4HANA, and Hybris applications to Azure infrastructure as a service. Throughout the duration of the project, the customer and/or SAP partner should review the checklist. Many of the checks are completed at the beginning of the project and during the planning phase. After the deployment is done, even seemingly straightforward changes to the deployed Azure infrastructure or SAP software releases can become complex.
Review the checklist at key milestones during your project. Doing so will enable you to detect small problems
before they become large problems. You'll also have enough time to re-engineer and test any necessary changes.
Don't consider this checklist complete. Depending on your situation, you might need to perform many more
checks.
The checklist doesn't include tasks that are independent of Azure. For example, SAP application interfaces change
during a move to the Azure platform or to a hosting provider.
This checklist can also be used for systems that are already deployed. New features, like Write Accelerator and
Availability Zones, and new VM types might have been added since you deployed. So it's useful to review the
checklist periodically to ensure you're aware of new features in the Azure platform.
Non-production phase
In this phase, we assume that after a successful pilot or proof of concept (POC), you're starting to deploy non-
production SAP systems to Azure. Incorporate everything you learned and experienced during the POC to this
deployment. All the criteria and steps listed for POCs apply to this deployment as well.
During this phase, you usually deploy development systems, unit testing systems, and business regression testing
systems to Azure. We recommend that at least one non-production system in one SAP application line has the full
high availability configuration that the future production system will have. Here are some additional steps that you
need to complete during this phase:
1. Before you move systems from the old platform to Azure, collect resource consumption data, like CPU usage,
storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it
from the application layer units. Also measure network and storage latency.
2. Record the availability usage time patterns of your systems. The goal is to figure out whether non-production systems need to be available all day, every day, or whether there are non-production systems that can be shut down during certain phases of a week or month.
3. Test and determine whether you want to create your own OS images for your VMs in Azure or whether you
want to use an image from the Azure Shared Image Gallery. If you're using an image from the Shared Image
Gallery, make sure to use an image that reflects the support contract with your OS vendor. For some OS
vendors, Shared Image Gallery lets you bring your own license images. For other OS images, support is
included in the price quoted by Azure. If you decide to create your own OS images, you can find documentation
in these articles:
Build a generalized image of a Windows VM deployed in Azure
Build a generalized image of a Linux VM deployed in Azure
4. If you use SUSE and Red Hat Linux images from the Shared Image Gallery, you need to use the images for SAP
provided by the Linux vendors in the Shared Image Gallery.
5. Make sure to fulfill the SAP support requirements for Microsoft support agreements. See SAP support note
#2015553. For HANA Large Instances, see Onboarding requirements.
6. Make sure the right people get planned maintenance notifications so you can choose the best downtimes.
7. Frequently check for Azure presentations on channels like Channel 9 for new functionality that might apply to
your deployments.
8. Check SAP notes related to Azure, like support note #1928533, for new VM SKUs and newly supported OS and
DBMS releases. Compare the pricing of new VM types against that of older VM types, so you can deploy VMs
with the best price/performance ratio.
9. Recheck SAP support notes, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no
changes in supported VMs for Azure, supported OS releases on those VMs, and supported SAP and DBMS
releases.
10. Check the SAP website for new HANA-certified SKUs in Azure. Compare the pricing of new SKUs with the ones you planned to use. If necessary, make changes to use the SKUs that have the best price/performance ratio.
11. Adapt your deployment scripts to use new VM types and incorporate new Azure features that you want to use.
12. After deployment of the infrastructure, test and evaluate the network latency between SAP application layer VMs and DBMS VMs, according to SAP support notes #500235 and #1100926; a simple latency measurement with SAP's niping tool is sketched after this list. Evaluate the results against the network latency guidance in SAP support note #1100926. The network latency should be in the moderate or good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in this article. Make sure that none of the restrictions mentioned in Considerations for Azure Virtual Machines DBMS deployment for SAP workloads and SAP HANA infrastructure configurations and operations on Azure apply to your deployment.
13. Make sure your VMs are deployed to the correct Azure proximity placement group, as described in Azure
proximity placement groups for optimal network latency with SAP applications.
14. Perform all the other checks listed for the proof of concept phase before applying the workload.
15. As the workload applies, record the resource consumption of the systems in Azure. Compare this consumption with records from your old platform. Adjust the VM sizing of future deployments if you see large differences. Keep in mind that when you downsize, the storage and network bandwidths of the VMs are reduced as well.
Sizes for Windows virtual machines in Azure
Sizes for Linux virtual machines in Azure
16. Experiment with system copy functionality and processes. The goal is to make it easy for you to copy a
development system or a test system, so project teams can get new systems quickly.
17. Optimize and hone your team's Azure role-based access, permissions, and processes to make sure you have
separation of duties. At the same time, make sure all teams can perform their tasks in the Azure infrastructure.
18. Exercise, test, and document high-availability and disaster recovery procedures to enable your staff to execute
these tasks. Identify shortcomings and adapt new Azure functionality that you're integrating into your
deployments.
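For the latency test in step 12, a rough measurement between an application server VM and the DBMS VM can be taken with the niping tool that ships with the SAP kernel. This is a minimal sketch; the host name, buffer size, and loop count are placeholders, and SAP support note #500235 describes the procedure authoritatively:

    # On the DBMS VM: start niping in server mode (idle timeout disabled).
    niping -s -I 0

    # On the application server VM: measure round-trip times to the DBMS VM.
    niping -c -H <dbms-host> -B 100 -L 1000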
Production preparation phase
In this phase, collect what you experienced and learned during your non-production deployments and apply it to
future production deployments. You also need to prepare the work of the data transfer between your current
hosting location and Azure.
1. Complete necessary SAP release upgrades of your production systems before moving to Azure.
2. Agree with the business owners on functional and business tests that need to be conducted after migration of
the production system.
3. Make sure these tests are completed with the source systems in the current hosting location. Avoid conducting
tests for the first time after the system is moved to Azure.
4. Test the process of migrating production systems to Azure. If you're not moving all production systems to
Azure during the same time frame, build groups of production systems that need to be at the same hosting
location. Test data migration. Here are some common methods:
Use DBMS methods like backup/restore in combination with SQL Server Always On, HANA System
Replication, or Log shipping to seed and synchronize database content in Azure.
Use backup/restore for smaller databases.
Use SAP Migration Monitor, which is integrated into SAP SWPM, to perform heterogeneous migrations.
Use the SAP DMO process if you need to combine your migration with an SAP release upgrade. Keep in
mind that not all combinations of source DBMS and target DBMS are supported. You can find more
information in the specific SAP support notes for the different releases of DMO. For example, Database
Migration Option (DMO) of SUM 2.0 SP04.
Test whether data transfer throughput is better through the internet or through ExpressRoute, in case
you need to move backups or SAP export files. If you're moving data through the internet, you might
need to change some of your network security group/application security group rules that you'll need to
have in place for future production systems.
5. Before moving systems from your old platform to Azure, collect resource consumption data. Useful data
includes CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units,
but also collect it from the application layer units. Also measure network and storage latency.
6. Recheck SAP support notes and the required OS settings, the SAP HANA hardware directory, and the SAP PAM.
Make sure there were no changes in supported VMs for Azure, supported OS releases in those VMs, and
supported SAP and DBMS releases.
7. Update deployment scripts to take into account the latest decisions you've made on VM types and Azure
functionality.
8. After deploying infrastructure and applications, validate that:
The correct VM types were deployed, with the correct attributes and storage sizes.
The VMs are on the correct and desired OS releases and patches and are uniform.
VMs are hardened as required and in a uniform way.
The correct application releases and patches were installed and deployed.
The VMs were deployed into Azure availability sets as planned.
Azure Premium Storage is used for latency-sensitive disks or where the single-VM SLA of 99.9% is
required.
Azure Write Accelerator is deployed correctly.
Make sure that, within the VMs, Storage Spaces or stripe sets were built correctly across the disks that need Write Accelerator.
Check the configuration of software RAID on Linux.
Check the configuration of LVM on Linux VMs in Azure.
Azure managed disks are used exclusively.
VMs were deployed into the correct availability sets and Availability Zones.
Azure Accelerated Networking is enabled on the VMs used in the SAP application layer and the SAP DBMS layer (a quick CLI check is sketched after this list).
No Azure network virtual appliances are in the communication path between the SAP application and
the DBMS layer of SAP systems based on SAP NetWeaver, Hybris, or S/4HANA.
Application security group and network security group rules allow communication as desired and
planned and block communication where required.
Timeout settings are set correctly, as described earlier.
VMs are deployed to the correct Azure proximity placement group, as described in Azure proximity
placement groups for optimal network latency with SAP applications.
Network latency between SAP application layer VMs and DBMS VMs is tested and validated as
described in SAP support notes #500235 and #1100926. Evaluate the results against the network
latency guidance in SAP support note #1100926. The network latency should be in the moderate or
good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in
this article.
Encryption was implemented where necessary and with the appropriate encryption method.
Interfaces and other applications can connect to the newly deployed infrastructure.
9. Create a playbook for reacting to planned Azure maintenance. Determine the order in which systems and VMs
should be rebooted for planned maintenance.
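Some of the infrastructure validations in step 8 can be scripted with the Azure CLI. The following is a sketch with hypothetical resource group, NIC, and VM names:

    # Check whether Accelerated Networking is enabled on a NIC.
    az network nic show --resource-group <rg> --name <nic-name> \
      --query enableAcceleratedNetworking --output tsv

    # Check which proximity placement group and availability set a VM uses.
    az vm show --resource-group <rg> --name <vm-name> \
      --query "{ppg:proximityPlacementGroup.id, avset:availabilitySet.id}"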
Go-live phase
During the go-live phase, be sure to follow the playbooks you developed during earlier phases. Execute the steps
that you tested and practiced. Don't accept last-minute changes in configurations and processes. Also complete
these steps:
1. Verify that Azure portal monitoring and other monitoring tools are working. We recommend Windows Performance Monitor (perfmon) for Windows and sar for Linux; a few sample sar invocations are sketched after this list. Monitor these counters:
CPU counters.
Average CPU time, total (all CPUs)
Average CPU time, each individual processor (128 processors on M128 VMs)
CPU kernel time, each individual processor
CPU user time, each individual processor
Memory.
Free memory
Memory page in/second
Memory page out/second
Disk.
Disk read in KBps, per individual disk
Disk reads/second, per individual disk
Disk read in microseconds/read, per individual disk
Disk write in KBps, per individual disk
Disk write/second, per individual disk
Disk write in microseconds/write, per individual disk
Network.
Network packets in/second
Network packets out/second
Network KB in/second
Network KB out/second
2. After data migration, perform all the validation tests you agreed upon with the business owners. Accept
validation test results only when you have results for the original source systems.
3. Check whether interfaces are functioning and whether other applications can communicate with the newly
deployed production systems.
4. Check the transport and correction system through SAP transaction STMS.
5. Perform database backups after the system is released for production.
6. Perform VM backups for the SAP application layer VMs after the system is released for production.
7. For SAP systems that weren't part of the current go-live phase but that communicate with the SAP systems
that you moved to Azure during this go-live phase, you need to reset the host name buffer in SM51. Doing so
will remove the old cached IP addresses associated with the names of the application instances you moved to
Azure.
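As mentioned in step 1, on Linux the counters listed above can be collected with sar from the sysstat package. A minimal sketch, sampling every 5 seconds for one minute:

    sar -u 5 12         # CPU utilization (add -P ALL for per-processor values)
    sar -r 5 12         # memory utilization
    sar -B 5 12         # paging in/out per second
    sar -d -p 5 12      # per-device disk reads/writes and throughput
    sar -n DEV 5 12     # per-interface network packets and KB in/out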
Post production
This phase is about monitoring, operating, and administering the system. From an SAP point of view, the usual
tasks that you were required to complete in your old hosting location apply. Complete these Azure-specific tasks
as well:
1. Review Azure invoices to identify systems that generate high charges.
2. Optimize price/performance efficiency on the VM side and the storage side.
3. Optimize the times when you can shut down systems.
Next steps
See these articles:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
Azure Virtual Machines planning and
implementation for SAP NetWeaver
12/22/2020 • 112 minutes to read
Microsoft Azure enables companies to acquire compute and storage resources in minimal time without lengthy procurement cycles. Azure Virtual Machines allows companies to deploy classical applications, like SAP NetWeaver-based applications, into Azure and extend their reliability and availability without having further resources available on-premises. Azure Virtual Machines also supports cross-premises connectivity, which enables companies to actively integrate Azure Virtual Machines into their on-premises domains, their private clouds, and their SAP system landscape. This white paper describes the fundamentals of Microsoft Azure Virtual Machines and provides a walk-through of planning and implementation considerations for SAP NetWeaver installations in Azure. As such, it should be read before starting actual deployments of SAP NetWeaver on Azure. The paper complements the SAP installation documentation and SAP Notes, which represent the primary resources for installations and deployments of SAP software on given platforms.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM
module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az
module and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module
installation instructions, see Install Azure PowerShell.
Summary
Cloud computing is a widely used term that is gaining more and more importance within the IT industry, from small companies up to large and multinational corporations.
Microsoft Azure is the cloud services platform from Microsoft, which offers a wide spectrum of new possibilities. Customers are now able to rapidly provision and de-provision applications as a service in the cloud, so they are not limited by technical or budget restrictions. Instead of investing time and budget into hardware infrastructure, companies can focus on the application, their business processes, and the benefits for customers and users.
With Microsoft Azure Virtual Machine Services, Microsoft offers a comprehensive Infrastructure as a
Service (IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines
(IaaS). This whitepaper describes how to plan and implement SAP NetWeaver based applications
within Microsoft Azure as the platform of choice.
The paper itself focuses on two main aspects:
The first part describes two supported deployment patterns for SAP NetWeaver based applications
on Azure. It also describes general handling of Azure with SAP deployments in mind.
The second part details implementing the different scenarios described in the first part.
For additional resources, see chapter Resources in this document.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or S/4HANA.
SAP components can be based on traditional ABAP or Java technologies or a non-NetWeaver based
application such as Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function
such as Development, QAS, Training, DR, or Production.
SAP Landscape: This term refers to all the SAP assets in a customer's IT landscape. The SAP landscape includes all production and non-production environments.
SAP System: The combination of the DBMS layer and the application layer of, for example, an SAP ERP development system, an SAP BW test system, an SAP CRM production system, etc. In Azure deployments, it is not supported to divide these two layers between on-premises and Azure. This means an SAP system is either deployed on-premises or deployed in Azure. However, you can deploy the different systems of an SAP landscape into either Azure or on-premises. For example, you could deploy the SAP CRM development and test systems in Azure but the SAP CRM production system on-premises.
Cross-premises or hybrid: Describes a scenario where VMs are deployed to an Azure subscription
that has site-to-site, multi-site, or ExpressRoute connectivity between the on-premises datacenter(s)
and Azure. In common Azure documentation, these kinds of deployments are also described as
cross-premises or hybrid scenarios. The reason for the connection is to extend on-premises
domains, on-premises Active Directory/OpenLDAP, and on-premises DNS into Azure. The on-
premises landscape is extended to the Azure assets of the subscription. Having this extension, the
VMs can be part of the on-premises domain. Domain users of the on-premises domain can access
the servers and can run services on those VMs (like DBMS services). Communication and name
resolution between VMs deployed on-premises and Azure-deployed VMs is possible. This is the most common, and nearly exclusive, case for deploying SAP assets into Azure. For more information, see this article and this one.
Azure Monitoring Extension, Enhanced Monitoring, and Azure Extension for SAP: These terms all describe the same item: a VM extension that you need to deploy to provide some basic data about the Azure infrastructure to the SAP Host Agent. SAP notes might refer to it as Monitoring Extension or Enhanced Monitoring. In Azure, it is referred to as Azure Extension for SAP.
NOTE
Cross-premises or hybrid deployments of SAP systems where Azure Virtual Machines running SAP systems are members of an on-premises domain are supported for production SAP systems. Cross-premises or hybrid configurations are supported for deploying parts of, or complete, SAP landscapes into Azure. Even running the complete SAP landscape in Azure requires those VMs to be part of the on-premises domain and ADS/OpenLDAP.
Resources
The entry point for SAP workload on Azure documentation is found here. Starting from this entry point, you find many articles that cover these topics:
SAP NetWeaver and Business One on Azure
SAP DBMS guides for various DBMS systems in Azure
High availability and disaster recovery for SAP workload on Azure
Specific guidance for running SAP HANA on Azure
Guidance specific to Azure HANA Large Instances for the SAP HANA DBMS
IMPORTANT
Wherever possible, a link to the referring SAP installation guides or other SAP documentation is used (Reference InstGuide-01, see http://service.sap.com/instguides). When it comes to the prerequisites, installation process, or details of specific SAP functionality, the SAP documentation and guides should always be read carefully, as the Microsoft documents only cover specific tasks for SAP software installed and operated in a Microsoft Azure Virtual Machine.
The following SAP Notes are related to the topic of SAP on Azure:
NOTE NUMBER    TITLE
Also read the SCN Wiki that contains all SAP Notes for Linux.
General default limitations and maximum limitations of Azure subscriptions can be found in this
article.
Possible Scenarios
SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and operations of these applications are mostly complex, and ensuring that you meet the requirements on availability and performance is important.
Thus, enterprises have to think carefully about which cloud provider to choose for running such business-critical processes. Azure is the ideal public cloud platform for business-critical SAP applications and business processes. Given the wide variety of Azure infrastructure, nearly all existing SAP NetWeaver and S/4HANA systems can be hosted in Azure today. Azure provides VMs with many terabytes of memory and more than 200 CPUs. Beyond that, Azure offers HANA Large Instances, which allow scale-up HANA deployments of up to 24 TB and SAP HANA scale-out deployments of up to 120 TB. One can state that today nearly all on-premises SAP scenarios can be run in Azure as well.
For a rough description of the scenarios and some non-supported scenarios, see the document SAP
workload on Azure virtual machine supported scenarios.
Check these scenarios, and the conditions that were named as not supported in the referenced documentation, throughout the planning and development of the architecture that you want to deploy into Azure.
Overall, the most common deployment pattern is a cross-premises scenario, as displayed below.
The reason many customers apply a cross-premises deployment pattern is the fact that it is most transparent for all applications to extend on-premises into Azure using Azure ExpressRoute and treat Azure as a virtual datacenter. As more and more assets are moved into Azure, the Azure-deployed infrastructure and network infrastructure will grow, and the on-premises assets will shrink accordingly. Everything stays transparent to users and applications.
In order to successfully deploy SAP systems into either Azure IaaS or IaaS in general, it is important to understand the significant differences between the offerings of traditional outsourcers or hosters and IaaS offerings. Whereas the traditional hoster or outsourcer adapts infrastructure (network, storage, and server type) to the workload a customer wants to host, it is instead the customer's or partner's responsibility to characterize the workload and choose the correct Azure components of VMs, storage, and network for IaaS deployments.
In order to gather data for the planning of your deployment into Azure, it is important to:
Evaluate what SAP products are supported running in Azure VMs
Evaluate what specific Operating System releases are supported with specific Azure VMs for those
SAP products
Evaluate what DBMS releases are supported for your SAP products with specific Azure VMs
Evaluate whether some of the required OS/DBMS releases require you to perform SAP release,
Support Package upgrade, and kernel upgrades to get to a supported configuration
Evaluate whether you need to move to different operating systems in order to deploy on Azure.
Details on supported SAP components on Azure, supported Azure infrastructure units and related
operating system releases and DBMS releases are explained in the article What SAP software is
supported for Azure deployments. Results gained from the evaluation of valid SAP releases, operating system releases, and DBMS releases have a large impact on the effort of moving SAP systems to Azure. The results of this evaluation define whether significant preparation could be needed in cases where SAP release upgrades or changes of operating systems are required.
Azure Regions
Microsoft's Azure services are collected in Azure regions. An Azure region is one or a collection of datacenters that contain the hardware and infrastructure that runs and hosts the different Azure services. This infrastructure includes a large number of nodes that function as compute nodes or storage nodes, or run network functionality.
For a list of the different Azure regions, check the article Azure geographies. Not all Azure regions offer the same services. Depending on the SAP product you want to run, and the operating system and DBMS related to it, you can end up in a situation where a certain region does not offer the VM types you require. This is especially true for running SAP HANA, where you usually need VMs of the M/Mv2 VM series. These VM families are deployed only in a subset of the regions. You can find out which exact VM types, Azure storage types, or other Azure services are available in which regions with the help of the site Products available by region. As you start your planning and have certain regions in mind as the primary region and possibly a secondary region, you need to investigate first whether the necessary services are available in those regions.
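Besides the Products available by region site, the VM types offered in a region can also be queried with the Azure CLI. A small sketch, using West Europe and the M-series as arbitrary examples:

    # List which M-series VM sizes a region offers, including zone information.
    az vm list-skus --location westeurope --size Standard_M --output table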
Availability Zones
Several of the Azure regions implemented a concept called Availability Zones. Availability Zones are
physically separate locations within an Azure region. Each Availability Zone is made up of one or more
datacenters equipped with independent power, cooling, and networking. For example, deploying two
VMs across two Availability Zones of Azure, and implementing a high-availability framework for your
SAP DBMS system or the SAP Central Services gives you the best SLA in Azure. For this particular virtual machine SLA in Azure, check the latest version of Virtual Machine SLAs. Since Azure regions have developed and extended rapidly over the last years, the topology of the Azure regions, the number of physical datacenters, the distance among those datacenters, and the distance between Azure Availability Zones can differ, and with that, so can the network latency.
The principle of Availability Zones does not apply to the HANA specific service of HANA Large
Instances. Service Level agreements for HANA Large Instances can be found in the article SLA for SAP
HANA on Azure Large Instances
Fault Domains
Fault Domains represent a physical unit of failure, closely related to the physical infrastructure
contained in data centers, and while a physical blade or rack can be considered a Fault Domain, there is
no direct one-to-one mapping between the two.
When you deploy multiple Virtual Machines as part of one SAP system in Microsoft Azure Virtual
Machine Services, you can influence the Azure Fabric Controller to deploy your application into
different Fault Domains, thereby meeting higher requirements of availability SLAs. However, the
distribution of Fault Domains over an Azure Scale Unit (collection of hundreds of Compute nodes or
Storage nodes and networking) or the assignment of VMs to a specific Fault Domain is something over
which you do not have direct control. In order to direct the Azure fabric controller to deploy a set of
VMs over different Fault Domains, you need to assign an Azure availability set to the VMs at
deployment time. For more information on Azure availability sets, see chapter Azure availability sets in
this document.
Upgrade Domains
Upgrade Domains represent a logical unit that helps determine how a VM within an SAP system that consists of SAP instances running in multiple VMs is updated. When an upgrade occurs, Microsoft Azure goes through the process of updating these Upgrade Domains one by one.
deployment time over different Upgrade Domains, you can protect your SAP system partly from
potential downtime. In order to force Azure to deploy the VMs of an SAP system spread over different
Upgrade Domains, you need to set a specific attribute at deployment time of each VM. Similar to Fault
Domains, an Azure Scale Unit is divided into multiple Upgrade Domains. In order to direct the Azure
fabric controller to deploy a set of VMs over different Upgrade Domains, you need to assign an Azure
Availability Set to the VMs at deployment time. For more information on Azure availability sets, see
chapter Azure availability sets below.
Azure availability sets
Azure Virtual Machines within one Azure availability set are distributed by the Azure Fabric Controller
over different Fault and Upgrade Domains. The purpose of the distribution over different Fault and
Upgrade Domains is to prevent all VMs of an SAP system from being shut down in the case of
infrastructure maintenance or a failure within one Fault Domain. By default, VMs are not part of an
availability set. The participation of a VM in an availability set is defined at deployment time or later on
by a reconfiguration and redeployment of a VM.
To understand the concept of Azure availability sets and the way availability sets relate to Fault and
Upgrade Domains, read this article.
As you define availability sets and try to mix various VMs of different VM families within one availability set, you may encounter problems that prevent you from including a certain VM type in such an availability set. The reason is that the availability set is bound to a scale unit that contains a certain type of compute hosts, and a certain type of compute host can only run certain types of VM families. For example, if you create an availability set, deploy the first VM of the Esv3 family into it, and then try to deploy a VM of the M family as the second VM, the second allocation will be rejected. The reason is that Esv3-family VMs don't run on the same host hardware as the virtual machines of the M family. The same problem can occur when you try to resize VMs and move a VM from the Esv3 family to a VM type of the M family. In the case of resizing to a VM family that can't be hosted on the same host hardware, you need to shut down all VMs in your availability set and resize them to be able to run on the other host machine type. For SLAs of VMs that are deployed within an availability set, check the article Virtual Machine SLAs.
The principle of availability set and related update and fault domain does not apply to the HANA
specific service of HANA Large Instances. Service Level agreements for HANA Large Instances can be
found in the article SLA for SAP HANA on Azure Large Instances.
IMPORTANT
The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means, you can
either deploy a pair or multiple VMs into a specific Availability Zone or an Azure availability set. But not both.
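To make the availability set concept concrete, the following Azure CLI sketch creates an availability set and deploys a VM into it. The resource group, names, image, and VM size are placeholders, and the supported fault domain count depends on the region:

    # Create an availability set (domain counts are region-dependent).
    az vm availability-set create --resource-group <rg> --name <avset-name> \
      --platform-fault-domain-count 2 --platform-update-domain-count 20

    # Deploy a VM into that availability set.
    az vm create --resource-group <rg> --name <vm-name> \
      --image <image> --size Standard_E16s_v3 --availability-set <avset-name>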
With Azure virtual machines, Microsoft enables you to deploy custom server images to Azure as IaaS instances, or you can choose from a rich selection of consumable operating system images out of the Azure image gallery.
From an operational perspective, the Azure Virtual Machine Service offers similar experiences as
virtual machines deployed on premises. You are responsible for the administration, operations and
also the patching of the particular operating system, running in an Azure VM and its applications in
that VM. Microsoft is not providing any more services beyond hosting that VM on its Azure
infrastructure (Infrastructure as a Service - IaaS). For SAP workload that you as a customer deploy,
Microsoft has no offers beyond the IaaS offerings.
The Microsoft Azure platform is a multi-tenant platform. As a result, storage, network, and compute resources that host Azure VMs are, with a few exceptions, shared between tenants. Intelligent throttling and quota logic is used to prevent one tenant from drastically impacting the performance of another tenant (noisy neighbor). Especially for certifying the Azure platform for SAP HANA, Microsoft needs to prove to SAP the resource isolation for cases where multiple VMs can run on the same host on a regular basis. Though logic in Azure tries to keep the variances in experienced bandwidth small, highly shared platforms tend to introduce larger variances in resource/bandwidth availability than customers might experience in their on-premises deployments. The probability that an SAP system on Azure could experience larger variances than in an on-premises system needs to be taken into account.
Azure virtual machines for SAP workload
For SAP workload, we narrowed down the selection to different VM families that are suitable for SAP workload and, more specifically, SAP HANA workload. The way to find the correct VM type, and its capability to work through SAP workload, is described in the document What SAP software is supported for Azure deployments.
NOTE
For the VM types that are certified for SAP workload, there is no over-provisioning of CPU and memory resources.
Beyond the selection of purely supported VM types, you also need to check whether those VM types are available in a specific region, based on the site Products available by region. More importantly, you need to evaluate whether:
CPU and memory resources of different VM types
IOPS bandwidth of different VM types
Network capabilities of different VM types
Number of disks that can be attached
Ability to leverage certain Azure storage types
fit your needs. Most of that data can be found here (Linux) and here (Windows) for a particular VM type.
As for the pricing model, you can choose from several options:
Pay as you go
One year reserved
Three years reserved
Spot pricing
The pricing of each of the different offers with different service offers around operating systems and
different regions is available on the site Linux Virtual Machines Pricing and Windows Virtual Machines
Pricing. For details and flexibility of one year and three year reserved instances, check these articles:
What are Azure Reservations?
Virtual machine size flexibility with Reserved VM Instances
How the Azure reservation discount is applied to virtual machines
For more information on spot pricing, read the article Azure Spot Virtual Machines. Pricing of the same VM type can also differ between Azure regions. For some customers, it was worth deploying into a less expensive Azure region.
Additionally, Azure offers the concept of a dedicated host. The dedicated host concept gives you more control over the patching cycles that are done by Azure. You can time the patching according to your own schedule. This offer specifically targets customers with workloads that might not follow the normal cycle of workload. To read up on the concepts of Azure dedicated host offers, read the article Azure Dedicated Host. Using this offer is supported for SAP workload, and it is used by several SAP customers who want more control over the patching of infrastructure and eventual maintenance plans of Microsoft. For more information on how Microsoft maintains and patches the Azure infrastructure that hosts virtual machines, read the article Maintenance for virtual machines in Azure.
Generation 1 and Generation 2 virtual machines
Microsoft's hypervisor is able to handle two different generations of virtual machines. These formats are called Generation 1 and Generation 2. Generation 2 was introduced in 2012 with the Windows Server 2012 hypervisor. Azure started out using Generation 1 virtual machines. As you deploy Azure virtual machines, the default is still the Generation 1 format. Meanwhile, you can deploy Generation 2 VM formats as well. The article Support for generation 2 VMs on Azure lists the Azure VM families that can be deployed as Generation 2 VMs. It also lists the important functional differences of Generation 2 virtual machines as they can run on Hyper-V private clouds and Azure. More importantly, it lists the functional differences between Generation 1 and Generation 2 virtual machines as those run in Azure.
NOTE
There are functional differences of Generation 1 and Generation 2 VMs running in Azure. Read the article
Support for generation 2 VMs on Azure to see a list of those differences.
Moving an existing VM from one generation to the other is not possible. To change the virtual machine generation, you need to deploy a new VM of the generation you desire and reinstall the software that you are running in the VM. This change only affects the base VHD image of the VM and has no impact on the data disks or attached NFS or SMB shares; data disks, NFS shares, or SMB shares that originally were assigned to, for example, a Generation 1 VM can be attached to the new VM.
NOTE
Deploying Mv1 VM family VMs as Generation 2 VMs has been possible since the beginning of May 2020. With that, seamless upsizing and downsizing between Mv1 and Mv2 family VMs is possible.
Windows
Drive D:\ in an Azure VM is a non-persisted drive, which is backed by some local disks on the Azure compute node. Because it is non-persisted, any changes made to the content on the D:\ drive are lost when the VM is rebooted. "Any changes" includes files stored, directories created, applications installed, and so on.
Linux
Linux Azure VMs automatically mount a drive at /mnt/resource, which is a non-persisted drive backed by local disks on the Azure compute node. Because it is non-persisted, any changes made to content in /mnt/resource are lost when the VM is rebooted. "Any changes" includes files stored, directories created, applications installed, and so on.
Azure persisted storage types
Azure offers a variety of persisted storage options that can be used for SAP workload and specific SAP stack components. For more details, read the document Azure Storage types for SAP workload.
Microsoft Azure Networking
Microsoft Azure provides a network infrastructure that allows the mapping of all scenarios that we want to realize with SAP software. The capabilities are:
Access from the outside, directly to the VMs via Windows Terminal Services or ssh/VNC
Access to services and specific ports used by applications within the VMs
Internal Communication and Name Resolution between a group of VMs deployed as Azure VMs
Cross-premises Connectivity between a customer's on-premises network and the Azure network
Cross Azure Region or data center connectivity between Azure sites
More information can be found here: https://azure.microsoft.com/documentation/services/virtual-
network/
There are many different possibilities to configure name and IP resolution in Azure. There is also an
Azure DNS service, which can be used instead of setting up your own DNS server. More information
can be found in this article and on this page.
For cross-premises or hybrid scenarios, we are relying on the fact that the on-premises
AD/OpenLDAP/DNS has been extended via VPN or private connection to Azure. For certain scenarios
as documented here, it might be necessary to have an AD/OpenLDAP replica installed in Azure.
Because networking and name resolution is a vital part of the database deployment for an SAP system,
this concept is discussed in more detail in the DBMS Deployment Guide.
Azure Virtual Networks
By building up an Azure Virtual Network, you can define the address range of the private IP addresses allocated by the Azure DHCP functionality. In cross-premises scenarios, the IP address range defined is still allocated using DHCP by Azure. However, Domain Name resolution is done on-premises (assuming that the VMs are part of an on-premises domain) and hence can resolve addresses beyond different Azure Cloud Services.
Every Virtual Machine in Azure needs to be connected to a Virtual Network.
More details can be found in this article and on this page.
NOTE
By default, once a VM is deployed you cannot change the Virtual Network configuration. The TCP/IP settings
must be left to the Azure DHCP server. Default behavior is Dynamic IP assignment.
The MAC address of the virtual network card may change, for example, after a resize. In this case, the Windows or Linux guest OS picks up the new network card and automatically uses DHCP to assign the IP and DNS addresses.
Static IP Assignment
It is possible to assign fixed or reserved IP addresses to VMs within an Azure Virtual Network. Running the VMs in an Azure Virtual Network opens up the possibility to leverage this functionality if it is needed or required for some scenarios. The IP assignment remains valid throughout the existence of the VM, independent of whether the VM is running or shut down. As a result, you need to take the overall number of VMs (running and stopped) into account when defining the range of IP addresses for the Virtual Network. The IP address remains assigned either until the VM and its network interface are deleted or until the IP address gets de-assigned again. For more information, read this article.
NOTE
You should assign static IP addresses through Azure means to individual vNICs. You should not assign static IP
addresses within the guest OS to a vNIC. Some Azure services like Azure Backup Service rely on the fact that at
least the primary vNIC is set to DHCP and not to static IP addresses. See also the document Troubleshoot
Azure virtual machine backup.
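Assigning a static private IP address through Azure means, as recommended in the note above, can be sketched with the Azure CLI as follows; the resource group, NIC name, and address are placeholders:

    # Pin the NIC's primary IP configuration to a static private address.
    # Supplying an explicit address switches the allocation method to Static.
    az network nic ip-config update --resource-group <rg> \
      --nic-name <nic-name> --name ipconfig1 --private-ip-address 10.1.0.10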
Multiple NICs per VM
You can define multiple virtual network interface cards (vNIC) for an Azure Virtual Machine. With the
ability to have multiple vNICs you can start to set up network traffic separation where, for example,
client traffic is routed through one vNIC and backend traffic is routed through a second vNIC.
Depending on the type of VM, there are different limitations for the number of vNICs a VM can have assigned. Exact details, functionality, and restrictions can be found in these articles:
Create a Windows VM with multiple NICs
Create a Linux VM with multiple NICs
Deploy multi NIC VMs using a template
Deploy multi NIC VMs using PowerShell
Deploy multi NIC VMs using the Azure CLI
Site-to-Site Connectivity
Cross-premises connectivity links Azure VMs and on-premises networks with a transparent and permanent VPN connection. It is expected to be the most common SAP deployment pattern in Azure. The assumption is that operational procedures and processes with SAP instances in Azure should work transparently. This means you should be able to print out of these systems as well as use the SAP Transport Management System (TMS) to transport changes from a development system in Azure to a test system that is deployed on-premises. More documentation around site-to-site can be found in this article.
VPN Tunnel Device
In order to create a site-to-site connection (on-premises datacenter to Azure datacenter), you need to either obtain and configure a VPN device, or use Routing and Remote Access Service (RRAS), which was introduced as a software component with Windows Server 2012.
Create a virtual network with a site-to-site VPN connection using PowerShell
About VPN devices for Site-to-Site VPN Gateway connections
VPN Gateway FAQ
The figure above shows how two Azure subscriptions have IP address subranges reserved for usage in Virtual Networks in Azure. The connectivity from the on-premises network to Azure is established via VPN.
Point-to-Site VPN
Point-to-site VPN requires every client machine to connect to Azure with its own VPN. For the SAP scenarios we are looking at, point-to-site connectivity is not practical. Therefore, no further references are given to point-to-site VPN connectivity.
More information can be found here
Configure a Point-to-Site connection to a VNet using the Azure portal
Configure a Point-to-Site connection to a VNet using PowerShell
Multi-Site VPN
Azure nowadays also offers the possibility to create Multi-Site VPN connectivity for one Azure subscription. Previously, a single subscription was limited to one site-to-site VPN connection. This limitation went away with Multi-Site VPN connections for a single subscription. This makes it possible to leverage more than one Azure region for a specific subscription through cross-premises configurations.
For more documentation, see this article
VNet to VNet Connection
When using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However, you often have the requirement that the software components in the different regions should communicate with each other. Ideally, this communication should not be routed from one Azure region to on-premises and from there to the other Azure region. As a shortcut, Azure offers the possibility to configure a connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another region. This functionality is called VNet-to-VNet connection. More details on this functionality can be found here: https://azure.microsoft.com/documentation/articles/vpn-gateway-vnet-vnet-rm-ps/.
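Provided a VPN gateway already exists in each of the two Virtual Networks, a VNet-to-VNet connection can be sketched with the Azure CLI as follows. The resource group, gateway names, and shared key are placeholders; gateways in other resource groups or regions need to be referenced by their full resource IDs:

    # Connect two VNet gateways with a VNet-to-VNet VPN connection.
    az network vpn-connection create --resource-group <rg> \
      --name vnet1-to-vnet2 \
      --vnet-gateway1 <gateway1-name-or-id> \
      --vnet-gateway2 <gateway2-name-or-id> \
      --shared-key <shared-key>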
Private Connection to Azure ExpressRoute
Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers
and either the customer's on-premises infrastructure or in a co-location environment. ExpressRoute is
offered by various MPLS (packet switched) VPN providers or other Network Service Providers.
ExpressRoute connections do not go over the public Internet. ExpressRoute connections offer higher
security, more reliability through multiple parallel circuits, faster speeds, and lower latencies than
typical connections over the Internet.
Find more details on Azure ExpressRoute and offerings here:
https://azure.microsoft.com/documentation/services/expressroute/
https://azure.microsoft.com/pricing/details/expressroute/
https://azure.microsoft.com/documentation/articles/expressroute-faqs/
ExpressRoute enables connecting multiple Azure subscriptions through one ExpressRoute circuit, as documented here:
https://azure.microsoft.com/documentation/articles/expressroute-howto-linkvnet-arm/
https://azure.microsoft.com/documentation/articles/expressroute-howto-circuit-arm/
Forced tunneling in case of cross-premises
For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to make sure that the Internet proxy settings are deployed for all the users in those VMs as well. By default, software running in those VMs, or users using a browser to access the internet, would not go through the company proxy but would connect straight through Azure to the internet. But even the proxy setting is not a 100% solution to direct the traffic through the company proxy, since it is the responsibility of software and services to check for the proxy. If software running in the VM doesn't do that, or if an administrator manipulates the settings, traffic to the internet can again be detoured directly through Azure to the internet.
In order to avoid such direct internet connectivity, you can configure Forced Tunneling with site-to-
site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling
feature is published here https://azure.microsoft.com/documentation/articles/vpn-gateway-forced-
tunneling-rm/
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the
ExpressRoute BGP peering sessions.
Summary of Azure networking
This chapter contained many important points about Azure Networking. Here is a summary of the
main points:
Azure Virtual Networks allow you to put a network structure into your Azure deployment. VNets can be isolated from each other, or traffic between VNets can be controlled with the help of Network Security Groups.
Azure Virtual Networks can be leveraged to assign IP address ranges to VMs or to assign fixed IP addresses to VMs.
To set up a site-to-site or point-to-site connection, you need to create an Azure Virtual Network first.
Once a virtual machine has been deployed, it is no longer possible to change the Virtual Network assigned to the VM.
Quotas in Azure virtual machine services
Be aware that the storage and network infrastructure is shared between VMs running a variety of services in the Azure infrastructure. As in the customer's own datacenters, over-provisioning of some of the infrastructure resources does take place to a degree. The Microsoft Azure platform uses disk, CPU, network, and other quotas to limit resource consumption and to preserve consistent and deterministic performance. The different VM types (A5, A6, etc.) have different quotas for the number of disks, CPU, RAM, and network.
NOTE
CPU and memory resources of the VM types supported by SAP are pre-allocated on the host nodes. This
means that once the VM is deployed, the resources on the host are available as defined by the VM type.
When planning and sizing SAP on Azure solutions, the quotas for each virtual machine size must be
considered. The VM quotas are described here (Linux) and here (Windows).
The quotas described represent the theoretical maximum values. The IOPS limit per disk may be achieved with small I/Os (8 KB) but possibly not with large I/Os (1 MB). The IOPS limit is enforced at the granularity of a single disk.
As a rough decision tree to decide whether an SAP system fits into Azure Virtual Machine Services and
its capabilities or whether an existing system needs to be configured differently in order to deploy the
system on Azure, the decision tree below can be used:
1. The most important information to start with is the SAPS requirement for a given SAP system. The
SAPS requirements need to be separated out into the DBMS part and the SAP application part, even
if the SAP system is already deployed on-premises in a 2-tier configuration. For existing systems,
the SAPS related to the hardware in use often can be determined or estimated based on existing
SAP benchmarks. The results can be found here. For newly deployed SAP systems, you should have
gone through a sizing exercise, which should determine the SAPS requirements of the system.
2. For existing systems, the I/O volume and I/O operations per second on the DBMS server should be measured. For newly planned systems, the sizing exercise for the new system should also give rough ideas of the I/O requirements on the DBMS side. If unsure, you may need to conduct a proof of concept.
3. Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure
can provide. The information on SAPS of the different Azure VM types is documented in SAP Note
1928533. The focus should be on the DBMS VM first since the database layer is the layer in an SAP
NetWeaver system that does not scale out in the majority of deployments. In contrast, the SAP
application layer can be scaled out. If none of the SAP supported Azure VM types can deliver the
required SAPS, the workload of the planned SAP system can't be run on Azure. You either need to
deploy the system on-premises or you need to change the workload volume for the system.
4. As documented here (Linux) and here (Windows), Azure enforces an IOPS quota per disk, independent of whether you use Standard Storage or Premium Storage. Depending on the VM type, the number of data disks that can be mounted varies. As a result, you can calculate a maximum IOPS number that can be achieved with each of the different VM types. Depending on the database file layout, you can stripe disks to become one volume in the guest OS. However, if the current IOPS volume of a deployed SAP system exceeds the calculated limits of the largest VM type of Azure, and if there is no chance to compensate with more memory, the workload of the SAP system can be impacted severely. In such cases, you can hit a point where you should not deploy the system on Azure.
5. Especially in SAP systems, which are deployed on-premises in 2-Tier configurations, the chances are
that the system might need to be configured on Azure in a 3-Tier configuration. In this step, you
need to check whether there is a component in the SAP application layer, which can't be scaled out
and which would not fit into the CPU and memory resources the different Azure VM types offer. If
there indeed is such a component, the SAP system and its workload can't be deployed into Azure.
But if you can scale out the SAP application components into multiple Azure VMs, the system can be
deployed into Azure.
If the DBMS and SAP application layer components can be run in Azure VMs, the configuration needs
to be defined with regard to:
Number of Azure VMs
VM types for the individual components
Number of VHDs in DBMS VM to provide enough IOPS
Administration and configuration tasks for the Virtual Machine instance are possible from within the Azure portal. Besides restarting and shutting down a Virtual Machine, you can also attach, detach, and create data disks for the Virtual Machine instance, capture the instance for image preparation, and configure the size of the Virtual Machine instance.
The Azure portal provides basic functionality to deploy and configure VMs and many other Azure services. However, not all available functionality is covered by the Azure portal. In the Azure portal, it's not possible to perform tasks like:
Uploading VHDs to Azure
Copying VMs
Management via Microsoft Azure PowerShell cmdlets
Windows PowerShell is a powerful and extensible framework that has been widely adopted by
customers deploying larger numbers of systems in Azure. After the installation of PowerShell cmdlets
on a desktop, laptop or dedicated management station, the PowerShell cmdlets can be run remotely.
The process to enable a local desktop/laptop for the usage of Azure PowerShell cmdlets and how to
configure those for the usage with the Azure subscription(s) is described in this article.
More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be
found in this chapter of the Deployment Guide.
Customer experience so far has been that PowerShell (PS) is certainly the more powerful tool to deploy
VMs and to create custom steps in the deployment of VMs. All of the customers running SAP instances
in Azure are using PS cmdlets to supplement management tasks they do in the Azure portal or are
even using PS cmdlets exclusively to manage their deployments in Azure. Since the Azure-specific
cmdlets share the same naming convention as the more than 2000 Windows-related cmdlets, it is an
easy task for Windows administrators to leverage those cmdlets.
See example here: https://blogs.technet.com/b/keithmayer/archive/2015/07/07/18-steps-for-end-to-
end-iaas-provisioning-in-the-cloud-with-azure-resource-manager-arm-powershell-and-desired-state-
configuration-dsc.aspx
Deployment of the Azure Extension for SAP (see chapter Azure Extension for SAP in this document) is
only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell or
CLI when deploying or administering an SAP NetWeaver system in Azure.
As Azure provides more functionality, new PS cmdlets are going to be added, which requires an update
of the cmdlets. Therefore, it makes sense to check the Azure download site
https://azure.microsoft.com/downloads/ at least once a month for a new version of the cmdlets. The
new version is installed on top of the older version.
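With the PowerShell Gallery based Az module, the check and update can also be done directly from a
PowerShell session; a minimal sketch, assuming the Az module was originally installed with
Install-Module:

# Show the version of the locally installed Az module
Get-InstalledModule -Name Az

# Install the newest version on top of the existing one
Update-Module -Name Az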
For a general list of Azure-related PowerShell commands check here: /powershell/azure/.
Management via Microsoft Azure CLI commands
For customers who use Linux and want to manage Azure resources, PowerShell might not be an option.
Microsoft offers Azure CLI as an alternative. The Azure CLI provides a set of open source, cross-
platform commands for working with the Azure Platform. The Azure CLI provides much of the same
functionality found in the Azure portal.
For information about installation, configuration and how to use CLI commands to accomplish Azure
tasks see
Install the Azure classic CLI
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Use the Azure classic CLI for Mac, Linux, and Windows with Azure Resource Manager
Also read chapter Azure CLI for Linux VMs in the Deployment Guide on how to use Azure CLI to deploy
the Azure Extension for SAP.
Windows
The Windows settings (like the Windows SID and hostname) must be abstracted/generalized on the
on-premises VM via the sysprep command. See more details here:
/azure/virtual-machines/windows/upload-generalized-managed.
Linux
Follow the steps described in these articles for SUSE, Red Hat, or Oracle Linux, to prepare a VHD to
be uploaded to Azure.
If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you
can adapt the SAP system settings after the deployment of the Azure VM through the instance rename
procedure supported by the SAP Software Provisioning Manager (SAP Note 1619720). See chapters
Preparation for deploying a VM with a customer-specific image for SAP and Uploading a VHD from
on-premises to Azure of this document for on-premises preparation steps and upload of a generalized
VM to Azure. Read chapter Scenario 2: Deploying a VM with a custom image for SAP in the
Deployment Guide for detailed steps of deploying such an image in Azure.
Deploying a VM out of the Azure Marketplace
You would like to use a Microsoft or third-party provided VM image from the Azure Marketplace to
deploy your VM. After you deployed your VM in Azure, you follow the same guidelines and tools to
install the SAP software and/or DBMS inside your VM as you would do in an on-premises
environment. For more detailed deployment description, see chapter Scenario 1: Deploying a VM out
of the Azure Marketplace for SAP in the Deployment Guide.
Preparing VMs with SAP for Azure
Before uploading VMs into Azure, you need to make sure the VMs and VHDs fulfill certain
requirements. There are small differences depending on the deployment method that is used.
Preparation for moving a VM from on-premises to Azure with a non-generalized disk
A common deployment method is to move an existing VM, which runs an SAP system, from on-
premises to Azure. That VM and the SAP system in the VM should simply run in Azure using the same
hostname and, most likely, the same SAP SID. In this case, the guest OS of the VM should not be
generalized for multiple deployments. If the on-premises network was extended into Azure, even the
same domain accounts can be used within the VM as were used before on-premises.
Requirements when preparing your own Azure VM Disk are:
Originally, the VHD containing the operating system could have a maximum size of 127 GB only.
This limitation was eliminated at the end of March 2015. Now the VHD containing the operating
system can be up to 1 TB in size, like any other Azure Storage hosted VHD.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDx format are not yet
supported on Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD
with PowerShell cmdlets or CLI.
VHDs that are mounted to the VM and should be mounted again in Azure to the VM need to be
in a fixed VHD format as well. Read this article for size limits of data disks. Dynamic VHDs will be
converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
Add another local account with administrator privileges, which can be used by Microsoft support
or which can be assigned as context for services and applications to run in, until the VM is
deployed and more appropriate users can be used.
Add other local accounts as those might be needed for the specific deployment scenario.
Windows
In this scenario no generalization (sysprep) of the VM is required to upload and deploy the VM on
Azure. Make sure that drive D:\ is not used. Set disk automount for attached disks as described in
chapter Setting automount for attached disks in this document.
Linux
In this scenario no generalization (waagent -deprovision) of the VM is required to upload and
deploy the VM on Azure. Make sure that /mnt/resource is not used and that ALL disks are mounted
via uuid. For the OS disk, make sure that the bootloader entry also reflects the uuid-based mount.
Preparation for deploying a VM with a customer-specific image for SAP
Windows
Make sure that drive D:\ is not used Set disk automount for attached disks as described in chapter
Setting automount for attached disks in this document.
Linux
Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS
disk, make sure the bootloader entry also reflects the uuid-based mount.
SAP GUI (for administrative and setup purposes) can be pre-installed in such a template.
Other software necessary to run the VMs successfully in cross-premises scenarios can be installed
as long as this software can work with the rename of the VM.
If the VM is prepared sufficiently to be generic and potentially independent of accounts/users not
available in the targeted Azure deployment scenario, the last preparation step of generalizing such an
image is conducted.
Generalizing a VM
Windows
The last step is to sign in to a VM with an Administrator account. Open a Windows command
window as administrator. Go to %windir%\system32\sysprep and execute sysprep.exe. A small
window appears. It is important to check the Generalize option (the default is unchecked) and
change the Shutdown Option from its default of 'Reboot' to 'Shutdown'. This procedure assumes
that the sysprep process is executed on-premises in the Guest OS of a VM. If you want to perform
the procedure with a VM already running in Azure, follow the steps described in this article.
Linux
How to capture a Linux virtual machine to use as a Resource Manager template
In this case, we want to upload a VHD, either with or without an OS in it, and mount it to a VM as a data
disk or use it as an OS disk. This is a multi-step process; the individual steps for PowerShell, Azure CLI,
and templates are listed below, and a consolidated PowerShell sketch follows the PowerShell list.
PowerShell
Sign in to your subscription with Connect-AzAccount
Set the subscription of your context with Set-AzContext and parameter SubscriptionId or
SubscriptionName - see /powershell/module/az.accounts/set-Azcontext
Upload the VHD with Add-AzVhd to an Azure Storage Account - see
/powershell/module/az.compute/add-Azvhd
(Optional) Create a Managed Disk from the VHD with New-AzDisk - see
/powershell/module/az.compute/new-Azdisk
Set the OS disk of a new VM config to the VHD or Managed Disk with Set-AzVMOSDisk - see
/powershell/module/az.compute/set-Azvmosdisk
Create a new VM from the VM config with New-AzVM - see /powershell/module/az.compute/new-
Azvm
Add a data disk to a new VM with Add-AzVMDataDisk - see /powershell/module/az.compute/add-
Azvmdatadisk
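Put together, the PowerShell steps above could look like the following sketch. All names, paths, and
IDs are placeholders, and an existing NIC is assumed for the VM creation; treat it as an outline of the
flow rather than a finished script.

Connect-AzAccount
Set-AzContext -SubscriptionId "<subscription id>"

# Upload the local fixed-format VHD into a storage account container
Add-AzVhd -ResourceGroupName "<resource group>" `
    -Destination "https://<storage account>.blob.core.windows.net/vhds/disk.vhd" `
    -LocalFilePath "C:\vhds\disk.vhd"

# Optional: wrap the uploaded VHD into a Managed Disk
$diskConfig = New-AzDiskConfig -Location "<region>" -CreateOption Import `
    -SourceUri "https://<storage account>.blob.core.windows.net/vhds/disk.vhd"
$disk = New-AzDisk -ResourceGroupName "<resource group>" -DiskName "<disk name>" -Disk $diskConfig

# Use the Managed Disk as OS disk of a new VM configuration (a NIC must already exist)
$vmConfig = New-AzVMConfig -VMName "<vm name>" -VMSize "<vm size>"
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -ManagedDiskId $disk.Id -CreateOption Attach -Windows
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id "<NIC resource id>"
New-AzVM -ResourceGroupName "<resource group>" -Location "<region>" -VM $vmConfig

# Or attach the disk as a data disk to a VM configuration instead:
# $vmConfig = Add-AzVMDataDisk -VM $vmConfig -Name "<data disk name>" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach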
Azure CLI
Sign in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id >
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk from the VHD with az disk create - see
https://docs.microsoft.com/cli/azure/disk
Create a new VM specifying the uploaded VHD or Managed Disk as OS disk with az vm create and
parameter --attach-os-disk
Add a data disk to a new VM with az vm disk attach and parameter --new
Template
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk from the VHD with PowerShell, Azure CLI, or the Azure portal
Deploy the VM with a JSON template referencing the VHD as shown in this example JSON template
or using Managed Disks as shown in this example JSON template.
Deployment of a VM Image
To upload an existing VM or VHD from the on-premises network in order to use it as an Azure VM
image, such a VM or VHD needs to meet the requirements listed in chapter Preparation for deploying a
VM with a customer-specific image for SAP of this document.
PowerShell
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Sign in to your subscription with Connect-AzAccount
Set the subscription of your context with Set-AzContext and parameter SubscriptionId or
SubscriptionName - see /powershell/module/az.accounts/set-Azcontext
Upload the VHD with Add-AzVhd to an Azure Storage Account - see
/powershell/module/az.compute/add-Azvhd
(Optional) Create a Managed Disk Image from the VHD with New-AzImage - see
/powershell/module/az.compute/new-Azimage
Set the OS disk of a new VM config to the:
VHD with Set-AzVMOSDisk -SourceImageUri -CreateOption fromImage - see
/powershell/module/az.compute/set-Azvmosdisk
Managed Disk Image with Set-AzVMSourceImage - see /powershell/module/az.compute/set-
Azvmsourceimage
Create a new VM from the VM config with New-AzVM - see /powershell/module/az.compute/new-
Azvm
Azure CLI
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Sign in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id >
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk Image from the VHD with az image create - see
https://docs.microsoft.com/cli/azure/image
Create a new VM specifying the uploaded VHD or Managed Disk Image as OS disk with az vm
create and parameter --image
Template
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk Image from the VHD with PowerShell, Azure CLI, or the Azure
portal
Deploy the VM with a JSON template referencing the image VHD as shown in this example JSON
template or using the Managed Disk Image as shown in this example JSON template.
Downloading VHDs or Managed Disks to on-premises
Azure Infrastructure as a Service is not a one-way street of only being able to upload VHDs and SAP
systems. You can move SAP systems from Azure back into the on-premises world as well.
While being downloaded, the VHDs or Managed Disks can't be active. Even when downloading disks
that are mounted to VMs, the VM needs to be shut down and deallocated. If you only want to
download the database content that should then be used to set up a new system on-premises, and if it
is acceptable that the system in Azure can still be operational during the time of the download and the
setup of the new system, you can avoid a long downtime by performing a compressed database
backup into a disk and downloading just that disk instead of also downloading the OS base VM.
PowerShell
Downloading a Managed Disk You first need to get access to the underlying blob of the
Managed Disk. Then you can copy the underlying blob to a new storage account and download
the blob from this storage account.
Downloading a VHD Once the SAP system is stopped and the VM is shut down, you can use the
PowerShell cmdlet Save-AzVhd on the on-premises target to download the VHD disks back to
the on-premises world. In order to do that, you need the URL of the VHD, which you can find in
the 'storage Section' of the Azure portal (need to navigate to the Storage Account and the
storage container where the VHD was created) and you need to know where the VHD should be
copied to.
Then you can leverage the command by defining the parameter SourceUri as the URL of the
VHD to download and the LocalFilePath as the physical location of the VHD (including its name).
The command could look like:
Save-AzVhd -ResourceGroupName <resource group name of storage account> -SourceUri
http://<storage account name>.blob.core.windows.net/<container name>/sapidesdata.vhd -
LocalFilePath E:\Azure_downloads\sapidesdata.vhd
Azure CLI
Downloading a VHD Once the SAP system is stopped and the VM is shut down, you can use the
Azure CLI command az storage blob download on the on-premises target to download the
VHD disks back to the on-premises world. In order to do that, you need the name and the
container of the VHD, which you can find in the 'Storage Section' of the Azure portal (you need to
navigate to the Storage Account and the storage container where the VHD was created), and you
need to know where the VHD should be copied to.
Then you can leverage the command by defining the parameters blob and container of the VHD
to download and the destination as the physical target location of the VHD (including its name).
The command could look like:
az storage blob download --name <name of the VHD to download> --container-name <container of
the VHD to download> --account-name <storage account name of the VHD to download> --account-
key <storage account key> --file <destination of the VHD to download>
Data disks can also be Managed Disks. In this case, the Managed Disk is used to create a new Managed
Disk before being attached to the virtual machine. The name of the Managed Disk must be unique
within a resource group.
PowerShell
You can use Azure PowerShell cmdlets to copy a VHD as shown in this article. To create a new Managed
Disk, use New-AzDiskConfig and New-AzDisk as shown in the following example.
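A minimal sketch of such a copy; the resource names are placeholders:

# Create a new Managed Disk as a copy of an existing Managed Disk
$sourceDisk = Get-AzDisk -ResourceGroupName "<source resource group>" -DiskName "<source disk name>"
$diskConfig = New-AzDiskConfig -Location $sourceDisk.Location -CreateOption Copy `
    -SourceResourceId $sourceDisk.Id
New-AzDisk -ResourceGroupName "<target resource group>" -DiskName "<new disk name>" -Disk $diskConfig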
Azure CLI
You can use Azure CLI to copy a VHD. To create a new Managed Disk, use az disk create as shown in the
following example.
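A minimal sketch with az disk create could look like this; the names are placeholders:

# Create a new Managed Disk as a copy of an existing Managed Disk
az disk create --resource-group <target resource group> --name <new disk name> --source <resource id of the source disk>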
Azure Storage tools
https://storageexplorer.com/
Professional editions of Azure Storage Explorers can be found here:
https://www.cerebrata.com/
https://clumsyleaf.com/products/cloudxplorer
The copy of a VHD itself within a storage account is a process that takes only a few seconds (similar
to SAN hardware creating snapshots with lazy copy and copy on write). After you have a copy of the
VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to
virtual machines.
PowerShell
# attach a vhd to a vm
$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzVMDataDisk -VM $vm -Name newdatadisk -VhdUri <path to vhd> -Caching <caching option> -
DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzVM
Azure CLI
# attach a vhd to a vm
az vm unmanaged-disk attach --resource-group <resource group name> --vm-name <vm name> --vhd-uri
<path to vhd>
You can also copy VHDs between subscriptions. For more information, read this article.
The basic flow of the PS cmdlet logic looks like this:
Create a storage account context for the source storage account with New-AzStorageContext - see
/powershell/module/az.storage/new-AzStoragecontext
Create a storage account context for the target storage account with New-AzStorageContext - see
/powershell/module/az.storage/new-AzStoragecontext
Start the copy with Start-AzStorageBlobCopy - see
/powershell/module/az.storage/start-AzStorageblobcopy
Check the status of the copy in a loop with

Get-AzStorageBlobCopyState -Blob <target blob name> -Container <target container name> -Context
<variable containing context of target storage account>

With Azure CLI, start the copy with

az storage blob copy start --source-blob <source blob name> --source-container <source container
name> --source-account-name <source storage account name> --source-account-key <source storage
account key> --destination-container <target container name> --destination-blob <target blob name>
--account-name <target storage account name> --account-key <target storage account key>

and check the status of the copy with

az storage blob show --name <target blob name> --container <target container name> --account-name
<target storage account name> --account-key <target storage account key>
Linux
Place the Linux swapfile under /mnt/resource on Linux as described in this article. The swap
file can be configured in the configuration file of the Linux Agent /etc/waagent.conf. Add or change
the following settings:
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=30720
To activate the changes, you need to restart the Linux Agent with
sudo service waagent restart
Read SAP Note 1597355 for more details on the recommended swap file size.
The number of disks used for the DBMS data files and the type of Azure Storage these disks are hosted
on should be determined by the IOPS requirements and the latency required. Exact quotas are
described in this article (Linux) and this article (Windows).
Experience from SAP deployments over the last two years taught us some lessons, which can be
summarized as:
IOPS traffic to different data files is not always the same, since existing customer systems might
have differently sized data files representing their SAP database(s). As a result, it turned out to be
better to use a RAID configuration over multiple disks and to place the data files on LUNs carved
out of those. There were situations, especially with Azure Standard Storage, where an IOPS rate hit
the quota of a single disk holding the DBMS transaction log. In such scenarios, the use of Premium
Storage is recommended, or alternatively aggregating multiple Standard Storage disks with a
software stripe.
Windows
Performance best practices for SQL Server in Azure Virtual Machines
Linux
Configure Software RAID on Linux
Configure LVM on a Linux VM in Azure
Premium Storage is showing significantly better performance, especially for critical transaction log
writes. For SAP scenarios that are expected to deliver production-like performance, it is highly
recommended to use VM series that can leverage Azure Premium Storage.
Keep in mind that the disk that contains the OS, and as we recommend, the binaries of SAP and the
database (base VM) as well, is no longer limited to 127 GB. It can now be up to 1 TB in size. This
should be enough space to keep all the necessary files including, for example, SAP batch job logs.
For more suggestions and more details, specifically for DBMS VMs, consult the DBMS Deployment
Guide.
Disk Handling
In most scenarios, you need to create additional disks in order to deploy the SAP database into the VM.
We talked about the considerations on number of disks in chapter VM/disk structure for SAP
deployments of this document. The Azure portal allows you to attach and detach disks once a base VM
is deployed. The disks can be attached/detached while the VM is up and running as well as while it is
stopped. When attaching a disk, the Azure portal offers to attach an empty disk or an existing disk
that, at this point in time, is not attached to another VM.
Note: Disks can only be attached to one VM at any given time.
During the deployment of a new virtual machine, you can decide whether you want to use Managed
Disks or place your disks on Azure Storage Accounts. If you want to use Premium Storage, we
recommend using Managed Disks.
Next, you need to decide whether you want to create a new and empty disk or whether you want to
select an existing disk that was uploaded earlier and should be attached to the VM now.
IMPORTANT: You DO NOT want to use Host Caching with Azure Standard Storage. You should leave
the Host Cache preference at the default of NONE. With Azure Premium Storage, you should enable
Read Caching if the I/O characteristic is mostly read, as is typical for I/O traffic against database data
files. In the case of database transaction log files, no caching is recommended.
Windows
How to attach a data disk in the Azure portal
If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If
automount is not enabled as recommended in chapter Setting automount for attached disks, the
newly attached volume needs to be taken online and initialized.
Linux
If disks are attached, you need to sign in to the VM and initialize the disks as described in this
article
If the new disk is an empty disk, you need to format the disk as well. For formatting, especially for
DBMS data and log files the same recommendations as for bare-metal deployments of the DBMS
apply.
An Azure Storage account does not provide infinite resources in terms of I/O volume, IOPS, and data
volume. Usually, DBMS VMs are most affected by this. It might be best to use a separate Storage
Account for each VM if you only have a few high I/O volume VMs to deploy, in order to stay within the
limits of the Azure Storage Account volume. Otherwise, you need to see how you can balance these
VMs between different Storage Accounts without hitting the limit of each single Storage Account. More
details are discussed in the DBMS Deployment Guide. You should also keep these limitations in mind
for pure SAP application server VMs or other VMs, which eventually might require additional VHDs.
These restrictions do not apply if you use Managed Disks. If you plan to use Premium Storage, we
recommend using Managed Disks.
Another topic, which is relevant for Storage Accounts is whether the VHDs in a Storage Account are
getting Geo-replicated. Geo-replication is enabled or disabled on the Storage Account level and not on
the VM level. If geo-replication is enabled, the VHDs within the Storage Account would be replicated
into another Azure data center within the same region. Before deciding on this, you should think about
the following restriction:
Azure Geo-replication works locally on each VHD in a VM and does not replicate the I/Os in
chronological order across multiple VHDs in a VM. Therefore, the VHD that represents the base VM as
well as any additional VHDs attached to the VM are replicated independent of each other. This means
there is no synchronization between the changes in the different VHDs. The fact that the I/Os are
replicated independently of the order in which they are written means that geo-replication is not of
value for database servers that have their databases distributed over multiple VHDs. In addition to the
DBMS, there also might be other applications where processes write or manipulate data in different
VHDs and where it is important to keep the order of changes. If that is a requirement, geo-replication
in Azure should not be enabled. Depending on whether you need or want geo-replication for one set of
VMs but not for another set, you can categorize VMs and their related VHDs into different Storage
Accounts that have geo-replication enabled or disabled.
Setting automount for attached disks
Windows
For VMs that are created from your own images or disks, it is necessary to check and possibly set the
automount parameter. Setting this parameter allows the VM, after a restart or redeployment in
Azure, to mount the attached/mounted drives again automatically. The parameter is set for the
images provided by Microsoft in the Azure Marketplace.
In order to set the automount, check the documentation of the command-line executable
diskpart.exe here:
DiskPart Command-Line Options
Automount
The Windows command-line window should be opened as administrator.
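A minimal sketch of enabling automount with diskpart could look like this:

rem Run in an elevated command prompt; diskpart opens its own prompt
diskpart
rem Inside the diskpart prompt:
automount enable
exit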
If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If
automount is not enabled as recommended in chapter Setting automount for attached disks, the
newly attached volume needs to be taken online and initialized.
Linux
You need to initialize a newly attached empty disk as described in this article. You also need to add
new disks to /etc/fstab.
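An /etc/fstab entry mounting a data disk by UUID could look like the following sketch; the UUID,
mount point, and file system are placeholders:

# Mount the data disk by UUID so the mapping survives reboots and redeployments
UUID=<uuid of the file system>  <mount point>  xfs  defaults,nofail  0  2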
Final Deployment
For the final deployment and exact steps, especially with regards to the deployment of the Azure
Extension for SAP, refer to the Deployment Guide.
Windows
By default, the Windows Firewall within an Azure deployed VM is turned on. You now need to allow
the SAP port to be opened, otherwise the SAP GUI will not be able to connect. To do this:
Open Control Panel\System and Security\Windows Firewall to Advanced Settings.
Now right-click on Inbound Rules and choose New Rule.
In the following wizard, choose to create a new Port rule.
In the next step of the wizard, leave the setting at TCP and type in the port number you want to
open. Since our SAP instance ID is 00, we took 3200 (the SAP dispatcher listens on port 32<nn>,
where <nn> is the instance number). If your instance has a different instance number, the port
defined based on that instance number should be opened.
In the next part of the wizard, you need to leave the item Allow Connection checked.
In the next step of the wizard, you need to define whether the rule applies for Domain, Private,
and Public networks. Adjust it if necessary to your needs. However, when connecting with SAP GUI
from the outside through the public network, you need to have the rule applied to the public
network.
In the last step of the wizard, name the rule and save it by pressing Finish.
The rule becomes effective immediately.
Linux
The Linux images in the Azure Marketplace do not enable the iptables firewall by default and the
connection to your SAP system should work. If you enabled iptables or another firewall, refer to
the documentation of iptables or the used firewall to allow inbound tcp traffic to port 32xx (where
xx is the system number of your SAP system).
Security recommendations
The SAP GUI does not connect immediately to any of the SAP instances (port 32xx) that are running,
but first connects via the port opened to the SAP message server process (port 36xx). In the past, the
same port was used by the message server for the internal communication to the application
instances. To prevent on-premises application servers from inadvertently communicating with a
message server in Azure, the internal communication ports can be changed. It is highly recommended
to change the internal communication between the SAP message server and its application instances
to a different port number on systems that have been cloned from on-premises systems, such as a
clone of development for project testing. This can be done with the default profile parameter:
rdisp/msserv_internal
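In the default profile, such an entry could look like the following sketch; the port number 3900 is
only an example value:

# Move internal message server communication to a dedicated port (example value)
rdisp/msserv_internal = 3900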
Single VM with SAP NetWeaver demo/training scenario
In this scenario, we are implementing a typical training/demo system scenario where the complete
training/demo scenario is contained in a single VM. We assume that the deployment is done through
VM image templates. We also assume that multiple of these demo/training VMs need to be deployed
with the VMs having the same name. The whole training systems don't have connectivity to your on-
premises assets and are the opposite of a hybrid deployment.
The assumption is that you created a VM Image as described in some sections of chapter Preparing
VMs with SAP for Azure in this document.
The sequence of events to implement the scenario looks like this:
PowerShell
$rgName = "SAPERPDemo1"
New-AzResourceGroup -Name $rgName -Location "North Europe"
Create a new storage account if you don't want to use Managed Disks
Create a new virtual network for every training/demo landscape to enable the usage of the same
hostname and IP addresses. The virtual network is protected by a Network Security Group that only
allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.
# Create a new Virtual Network
$rdpRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGRDP -Protocol * -SourcePortRange * -
DestinationPortRange 3389 -Access Allow -Direction Inbound -SourceAddressPrefix * -
DestinationAddressPrefix * -Priority 100
$sshRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGSSH -Protocol * -SourcePortRange * -
DestinationPortRange 22 -Access Allow -Direction Inbound -SourceAddressPrefix * -
DestinationAddressPrefix * -Priority 101
$nsg = New-AzNetworkSecurityGroup -Name SAPERPDemoNSG -ResourceGroupName $rgName -Location "North
Europe" -SecurityRules $rdpRule,$sshRule
Create a new public IP address that can be used to access the virtual machine from the internet
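The PowerShell counterpart of these network steps could look like the following sketch, mirroring
the CLI commands later in this chapter; $nsg is the Network Security Group created above, and the
subnet and address range are the same example values:

# Create the virtual network and subnet, protected by the NSG created above
$subnet = New-AzVirtualNetworkSubnetConfig -Name Subnet1 -AddressPrefix "10.0.1.0/24" `
    -NetworkSecurityGroup $nsg
$vnet = New-AzVirtualNetwork -Name SAPERPDemoVNet -ResourceGroupName $rgName `
    -Location "North Europe" -AddressPrefix "10.0.1.0/24" -Subnet $subnet

# Create the public IP address and the NIC used by the demo VM
$pip = New-AzPublicIpAddress -Name SAPERPDemoPIP -ResourceGroupName $rgName `
    -Location "North Europe" -AllocationMethod Dynamic
$nic = New-AzNetworkInterface -Name SAPERPDemoNIC -ResourceGroupName $rgName `
    -Location "North Europe" -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id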
Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the
SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group,
the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs
with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not
valid. Therefore, a new administrator user name needs to be defined together with a password. The
size of the VM also needs to be defined.
#####
# Create a new virtual machine with an official image from the Azure Marketplace
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11
# select image
$vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "MicrosoftWindowsServer" -Offer
"WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred -ProvisionVMAgent -EnableAutoUpdate
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "SUSE" -Offer "SLES-SAP" -Skus "12-
SP1" -Version "latest"
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "RedHat" -Offer "RHEL" -Skus "7.2"
-Version "latest"
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "Oracle" -Offer "Oracle-Linux" -
Skus "7.2" -Version "latest"
# $vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred
$diskName="osfromimage"
$osDiskUri=$account.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"
#####
# Create a new virtual machine with a Managed Disk Image
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11
Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs)
must be unique within Azure.
CLI
The following example code can be used on Linux. For Windows, either use PowerShell as described
above or adapt the example to use %rgName% instead of $rgName and set the environment variable
using the Windows command set.
Create a new resource group for every training/demo landscape
rgName=SAPERPDemo1
rgNameLower=saperpdemo1
az group create --name $rgName --location "North Europe"
az storage account create --resource-group $rgName --location "North Europe" --kind Storage --sku
Standard_LRS --name $rgNameLower
Create a new virtual network for every training/demo landscape to enable the usage of the same
hostname and IP addresses. The virtual network is protected by a Network Security Group that only
allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.
az network nsg create --resource-group $rgName --location "North Europe" --name SAPERPDemoNSG
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name
SAPERPDemoNSGRDP --protocol \* --source-address-prefix \* --source-port-range \* --destination-
address-prefix \* --destination-port-range 3389 --access Allow --priority 100 --direction Inbound
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name
SAPERPDemoNSGSSH --protocol \* --source-address-prefix \* --source-port-range \* --destination-
address-prefix \* --destination-port-range 22 --access Allow --priority 101 --direction Inbound
az network vnet create --resource-group $rgName --name SAPERPDemoVNet --location "North Europe" --
address-prefixes 10.0.1.0/24
az network vnet subnet create --resource-group $rgName --vnet-name SAPERPDemoVNet --name Subnet1 -
-address-prefix 10.0.1.0/24 --network-security-group SAPERPDemoNSG
Create a new public IP address that can be used to access the virtual machine from the internet
az network public-ip create --resource-group $rgName --location "North Europe" --name SAPERPDemoPIP
az network nic create --resource-group $rgName --location "North Europe" --name SAPERPDemoNIC --
public-ip-address SAPERPDemoPIP --subnet Subnet1 --vnet-name SAPERPDemoVNet
Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the
SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group,
the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs
with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not
valid. Therefore, a new administrator user name needs to be defined together with a password. The
size of the VM also needs to be defined.
#####
# Create virtual machines using storage accounts
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-
username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --
storage-account $rgNameLower --storage-container-name vhds --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password
<password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-
container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password
<password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-
container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-
password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --
storage-container-name vhds --os-disk-name os --authentication-type password
#####
# Create virtual machines using Managed Disks
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-
username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password
<password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password
<password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-
password <password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --os-type Windows --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --
os-disk-name os --image <path to image vhd>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --os-type Linux --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --
os-disk-name os --image <path to image vhd> --authentication-type password
#####
# Create a new virtual machine with a Managed Disk Image
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --
os-disk-name os --image <managed disk image id>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --
os-disk-name os --image <managed disk image id> --authentication-type password
Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs)
must be unique within Azure.
# Optional: Attach additional VHD data disks
az vm unmanaged-disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --vhd-uri
https://$rgNameLower.blob.core.windows.net/vhds/data.vhd --new
Template
You can use the sample templates on the Azure-quickstart-templates repository on GitHub.
Simple Linux VM
Simple Windows VM
VM from image
Implement a set of VMs that communicate within Azure
This non-hybrid scenario is a typical scenario for training and demo purposes where the software
representing the demo/training scenario is spread over multiple VMs. The different components
installed in the different VMs need to communicate with each other. Again, in this scenario no on-
premises network communication or cross-premises scenario is needed.
This scenario is an extension of the installation described in chapter Single VM with SAP NetWeaver
demo/training scenario of this document. In this case, more virtual machines will be added to an
existing resource group. In the following example, the training landscape consists of an SAP ASCS/SCS
VM, a VM running a DBMS, and an SAP Application Server instance VM.
Before you build this scenario, you need to think about basic settings as already exercised in the
scenario before.
Resource Group and Virtual Machine naming
All resource group names must be unique. Develop your own naming scheme for your resources, such
as <rg-name>-suffix.
The virtual machine name has to be unique within the resource group.
Set up Network for communication between the different VMs
To prevent naming collisions with clones of the same training/demo landscapes, you need to create an
Azure Virtual Network for every landscape. DNS name resolution will be provided by Azure or you can
configure your own DNS server outside Azure (not to be further discussed here). In this scenario, we
do not configure our own DNS. For all virtual machines inside one Azure Virtual Network,
communication via hostnames will be enabled.
The reasons to separate training or demo landscapes by virtual networks and not only resource groups
could be:
The SAP landscape as set up needs its own AD/OpenLDAP and a Domain Server needs to be part of
each of the landscapes.
The SAP landscape as set up has components that need to work with fixed IP addresses.
More details about Azure Virtual Networks and how to define them can be found in this article.
The minimum requirement is the use of secure communication protocols such as SSL/TLS for browser
access or VPN-based connections for system access to the Azure services. The assumption is that
companies handle the VPN connection between their corporate network and Azure differently. Some
companies might blankly open all the ports. Other companies might want to be precise about which
ports they need to open.
In the table below, typical SAP communication ports are listed. Basically, it is sufficient to open the SAP
gateway port.
| Service | Port name | Example | Default value | Comment |
| --- | --- | --- | --- | --- |
| Message server | sapms<sid> (see **) | 3600 | free sapms<anySID> | sid = SAP-System-ID |
| Gateway | sapgw<nn> (see *) | 3301 | free | SAP gateway, used for CPIC and RFC communication |
Setting up your on-premises TCP/IP based network printers in an Azure VM is overall the same as in
your corporate network, assuming you do have a VPN Site-To-Site tunnel or ExpressRoute connection
established.
Windows
To do this:
Some network printers come with a configuration wizard which makes it easy to set up your
printer in an Azure VM. If no wizard software has been distributed with the printer, the manual
way to set up the printer is to create a new TCP/IP printer port.
Open Control Panel -> Devices and Printers -> Add a printer
Choose Add a printer using a TCP/IP address or hostname
Type in the IP address of the printer
Printer Port standard 9100
If necessary install the appropriate printer driver manually.
Linux
As with Windows, just follow the standard procedure to install a network printer: follow the
public Linux guides for SUSE or Red Hat and Oracle Linux on how to add a printer.
Host-based printer over SMB (shared printer) in Cross-Premises scenario
Host-based printers are not network-compatible by design. But a host-based printer can be shared
among computers on a network as long as the printer is connected to a powered-on computer.
Connect your corporate network either via Site-To-Site VPN or ExpressRoute and share your local printer. The
SMB protocol uses NetBIOS instead of DNS as name service. The NetBIOS host name can be different
from the DNS host name. The standard case is that the NetBIOS host name and the DNS host name are
identical. The DNS domain does not make sense in the NetBIOS name space. Accordingly, the fully
qualified DNS host name consisting of the DNS host name and DNS domain must not be used in the
NetBIOS name space.
The printer share is identified by a unique name in the network:
Host name of the SMB host (always needed).
Name of the share (always needed).
Name of the domain if printer share is not in the same domain as SAP system.
Additionally, a user name and a password may be required to access the printer share.
How to:
Windows
Share your local printer. In the Azure VM, open the Windows Explorer and type in the share name
of the printer. A printer installation wizard will guide you through the installation process.
Linux
Here are some examples of documentation about configuring network printers in Linux or
including a chapter regarding printing in Linux. It will work the same way in an Azure Linux VM as
long as the VM is part of a VPN:
SLES https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share
RHEL or Oracle Linux https://access.redhat.com/documentation/en-
us/red_hat_enterprise_linux/7/html-single/system_administrators_guide/index#sec-
Starting_Print_Settings_Config
USB Printer (printer forwarding)
In Azure the ability of the Remote Desktop Services to provide users the access to their local printer
devices in a remote session is not available.
Windows
More details on printing with Windows can be found here:
https://technet.microsoft.com/library/jj590748.aspx.
Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
The SAP Change and Transport System (TMS) needs to be configured to export and import transport
requests across the systems in the landscape. We assume that the development instances of an SAP
system (DEV) are located in Azure, whereas the quality assurance (QA) and productive systems (PRD)
are on-premises. Furthermore, we assume that there is a central transport directory.
Configuring the Transport Domain
Configure your Transport Domain on the system you designated as the Transport Domain Controller
as described in Configuring the Transport Domain Controller. A system user TMSADM will be created
and the required RFC destination will be generated. You may check these RFC connections using the
transaction SM59. Hostname resolution must be enabled across your transport domain.
How to:
In our scenario, we decided the on-premises QAS system will be the CTS domain controller. Call
transaction STMS. The TMS dialog box appears. A Configure Transport Domain dialog box is
displayed. (This dialog box only appears if you have not yet configured a transport domain.)
Make sure that the automatically created user TMSADM is authorized (SM59 -> ABAP Connection -
> TMSADM@E61.DOMAIN_E61 -> Details -> Utilities(M) -> Authorization Test). The initial screen of
transaction STMS should show that this SAP System is now functioning as the controller of the
transport domain as shown here:
Supportability
Azure Extension for SAP
In order to feed some portion of Azure infrastructure information of mission critical SAP systems to
the SAP Host Agent instances, installed in VMs, an Azure (VM) Extension for SAP needs to get installed
for the deployed VMs. Since the demands by SAP were specific to SAP applications, Microsoft decided
not to generically implement the required functionality into Azure, but leave it for customers to deploy
the necessary VM extension and configurations to their Virtual Machines running in Azure. However,
deployment and lifecycle management of the Azure VM Extension for SAP will be mostly automated by
Azure.
Solution design
The solution developed to enable SAP Host Agent getting the required information is based on the
architecture of Azure VM Agent and Extension framework. The idea of the Azure VM Agent and
Extension framework is to allow installation of software application(s) available in the Azure VM
Extension gallery within a VM. The principle idea behind this concept is to allow (in cases like the Azure
Extension for SAP), the deployment of special functionality into a VM and the configuration of such
software at deployment time.
The 'Azure VM Agent' that enables handling of specific Azure VM Extensions within the VM is injected
into Windows VMs by default on VM creation in the Azure portal. In the case of SUSE, Red Hat, or
Oracle Linux, the VM agent is already part of the Azure Marketplace image. If you upload a Linux
VM from on-premises to Azure, the VM agent has to be installed manually.
The basic building blocks of the solution to provide Azure infrastructure information to SAP Host agent
in Azure looks like this:
As shown in the block diagram above, one part of the solution is hosted in the Azure VM Image and
Azure Extension Gallery, which is a globally replicated repository that is managed by Azure Operations.
It is the responsibility of the joint SAP/MS team working on the Azure implementation of SAP to work
with Azure Operations to publish new versions of the Azure Extension for SAP.
When you deploy a new Windows VM, the Azure VM Agent is automatically added into the VM. The
function of this agent is to coordinate the loading and configuration of the Azure Extensions of the
VMs. For Linux VMs, the Azure VM Agent is already part of the Azure Marketplace OS image.
However, there is a step that still needs to be executed by the customer. This is the enablement and
configuration of the performance collection. The process related to the configuration is automated by a
PowerShell script or CLI command. The PowerShell script can be downloaded in the Microsoft Azure
Script Center as described in the Deployment Guide.
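With the Az PowerShell module, this configuration step is exposed through a dedicated cmdlet; a
minimal sketch, in which the resource group and VM name are placeholders:

# Enable and configure the Azure Extension for SAP on an existing VM
Set-AzVMAemExtension -ResourceGroupName "<resource group>" -VMName "<vm name>"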
The overall Architecture of the Azure extension for SAP looks like:
For the exact how-to and for detailed steps of using these PowerShell cmdlets or CLI commands
during deployments, follow the instructions given in the Deployment Guide.
Integration of Azure located SAP instance into SAProuter
SAP instances running in Azure need to be accessible from SAProuter as well.
A SAProuter enables the TCP/IP communication between participating systems if there is no direct IP
connection. This provides the advantage that no end-to-end connection between the communication
partners is necessary on network level. The SAProuter is listening on port 3299 by default. To connect
SAP instances through a SAProuter, you need to give the SAProuter string and host name with any
attempt to connect.
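A route string for such a connection could look like the following sketch, where the hostnames are
placeholders and 3299 is the default SAProuter port:

/H/<saprouter hostname>/S/3299/H/<target SAP instance hostname>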
A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal
to the Internet while the virtual machine host is connected to the company network via site-to-site VPN
tunnel or ExpressRoute. For such a scenario, you have to make sure that specific ports are open and
not blocked by firewall or network security group.
The initial portal URI is http(s)://<Portalserver>:5XX00/irj where the port is formed as documented by
SAP in
https://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frames
et.htm.
If you want to customize the URL and/or ports of your SAP Enterprise Portal, check this
documentation:
Change Portal URL
Change Default port numbers, Portal port numbers
High Availability (HA) and Disaster Recovery (DR) for SAP
NetWeaver running on Azure Virtual Machines
Definition of terminologies
The term high availability (HA) is generally related to a set of technologies that minimizes IT
disruptions by providing business continuity of IT services through redundant, fault-tolerant, or
failover protected components inside the same data center. In our case, within one Azure Region.
Disaster recovery (DR) also targets minimizing the disruption of IT services and their recovery, but
across different data centers that are usually located hundreds of kilometers away. In our case, this is
usually between different Azure Regions within the same geopolitical region, or as established by you
as a customer.
Overview of High Availability
We can separate the discussion about SAP high availability in Azure into two parts:
Azure infrastructure high availability , for example HA of compute (VMs), network, storage etc.
and its benefits for increasing SAP application availability.
SAP application high availability , for example HA of SAP software components:
SAP application servers
SAP ASCS/SCS instance
DB server
and how it can be combined with Azure infrastructure HA.
SAP High Availability in Azure has some differences compared to SAP High Availability in an on-
premises physical or virtual environment. The following paper from SAP describes standard SAP High
Availability configurations in virtualized environments on Windows: https://scn.sap.com/docs/DOC-
44415. There is no sapinst-integrated SAP-HA configuration for Linux like it exists for Windows.
Regarding SAP HA on-premises for Linux find more information here: https://scn.sap.com/docs/DOC-
8541.
Azure Infrastructure High Availability
There is currently a single-VM SLA of 99.9%. To get an idea how the availability of a single VM might
look like, you can build the product of the different available Azure SLAs:
https://azure.microsoft.com/support/legal/sla/.
The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime
corresponds to 21.6 minutes. As usual, the availability of the different services will multiply in the
following way:
(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100)
Like:
(99.95/100) * (99.9/100) * (99.9/100) = 0.9975 or an overall availability of 99.75%.
Virtual Machine (VM) High Availability
There are two types of Azure platform events that can affect the availability of your virtual machines:
planned maintenance and unplanned maintenance.
Planned maintenance events are periodic updates made by Microsoft to the underlying Azure
platform to improve overall reliability, performance, and security of the platform infrastructure that
your virtual machines run on.
Unplanned maintenance events occur when the hardware or physical infrastructure underlying
your virtual machine has faulted in some way. This may include local network failures, local disk
failures, or other rack level failures. When such a failure is detected, the Azure platform will
automatically migrate your virtual machine from the unhealthy physical server hosting your virtual
machine to a healthy physical server. Such events are rare, but may also cause your virtual machine
to reboot.
For more details, see Availability of Windows virtual machines in Azure and Availability of Linux virtual
machines in Azure.
Azure Storage Redundancy
The data in your Microsoft Azure Storage Account is always replicated to ensure durability and high
availability, meeting the Azure Storage SLA even in the face of transient hardware failures.
Since Azure Storage is keeping three images of the data by default, RAID5 or RAID1 across multiple
Azure disks are not necessary.
For more details, see Azure Storage redundancy.
Utilizing Azure Infrastructure VM Restart to Achieve Higher Availability of SAP Applications
If you decide not to use functionalities like Windows Server Failover Clustering (WSFC) or Pacemaker
on Linux (currently only supported for SLES 12 and higher), Azure VM Restart is utilized to protect an
SAP System against planned and unplanned downtime of the Azure physical server infrastructure and
overall underlying Azure platform.
NOTE
It is important to mention that Azure VM Restart primarily protects VMs and NOT applications. VM Restart
does not offer high availability for SAP applications, but it does offer a certain level of infrastructure availability
and therefore indirectly higher availability of SAP systems. There is also no SLA for the time it will take to
restart a VM after a planned or unplanned host outage. Therefore, this method of high availability is not
suitable for critical components of an SAP system like (A)SCS or DBMS.
Another important infrastructure element for high availability is storage. For example, the Azure
Storage SLA is 99.9% availability. If one deploys all VMs with their disks into a single Azure Storage
Account, potential Azure Storage unavailability will cause unavailability of all VMs that are placed in
that Azure Storage Account, and also of all SAP components running inside of those VMs.
Instead of putting all VMs into one single Azure Storage Account, you can also use dedicated storage
accounts for each VM, and in this way increase overall VM and SAP application availability by using
multiple independent Azure Storage Accounts.
Azure managed disks are automatically placed in the Fault Domain of the virtual machine they are
attached to. If you place two virtual machines in an availability set and use Managed Disks, the
platform will take care of distributing the Managed Disks into different Fault Domains as well. If you
plan to use Premium Storage, we highly recommend using Managed Disks as well.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and storage
accounts could look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and Managed
Disks could look like this:
To start SAP instances automatically after a VM reboot, you can add the profile parameter Autostart = 1
into the start profile of the SAP ABAP and/or Java instance.
NOTE
The Autostart parameter can have some drawbacks as well. In more detail, the parameter triggers the start of an
SAP ABAP or Java instance when the related Windows/Linux service of the instance is started. That certainly is
the case when the operating system boots up. However, restarts of SAP services are also common for
SAP Software Lifecycle Management functionality like SUM, or other updates or upgrades. These functionalities
do not expect an instance to be restarted automatically at all. Therefore, the Autostart parameter should be
disabled before running such tasks. The Autostart parameter also should not be used for SAP instances that
are clustered, like ASCS/SCS/CI.
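As a minimal illustration (the instance name and profile file are hypothetical examples), Autostart is a single parameter line in the profile of the instance:
# Start profile of the SAP instance, for example START_D00_<hostname>
# (on newer kernel releases, the parameter lives in the instance profile instead)
Autostart = 1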
NOTE
As of Dec 2015, using VM Backup does NOT keep the unique VM ID that is used for SAP licensing. This means
that a restore from a VM backup requires the installation of a new SAP license key, as the restored VM is
considered to be a new VM and not a replacement of the former one that was saved.
Windows
Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system
supports the Windows VSS (Volume Shadow Copy Service,
https://msdn.microsoft.com/library/windows/desktop/bb968832(v=vs.85).aspx), as, for example, SQL Server
does. However, be aware that point-in-time restores of databases are not possible based on Azure VM
backups. Therefore, the recommendation is to perform backups of databases with DBMS functionality instead
of relying on Azure VM Backup.
To get familiar with Azure Virtual Machine Backup start here: /azure/backup/backup-azure-vms.
Other possibilities are to use a combination of Microsoft Data Protection Manager installed in an Azure VM
and Azure Backup to back up/restore databases. More information can be found here: /azure/backup/backup-azure-dpm-introduction.
Linux
There is no equivalent to Windows VSS in Linux. Therefore, only file-consistent backups are possible, but not
application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file
system that includes the SAP-related data can be saved, for example, using tar as described here:
https://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm
Next steps
Read the articles:
Azure Virtual Machines deployment for SAP NetWeaver
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA infrastructure configurations and operations on Azure (/azure/virtual-machines/workloads/sap/hana-vm-operations)
Azure Storage types for SAP workload
Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and price. Some of the
storage types are not usable, or only of limited use, for SAP scenarios, whereas several Azure storage types are well
suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage types got
certified for the usage with SAP HANA. In this document, we go through the different types of storage
and describe their capability and usability with SAP workloads and SAP components.
A remark about the units used throughout this article: the public cloud vendors moved to using GiB (Gibibyte) or
TiB (Tebibyte) as size units, instead of Gigabyte or Terabyte. Therefore, all Azure documentation and pricing
use those units. Throughout the document, we reference the size units MiB, GiB, and TiB
exclusively. You might need to plan with MB, GB, and TB, so be aware of some small differences in the
calculations, for example, if you need to size for a 400 MiB/sec throughput instead of a 250 MiB/sec throughput.
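As a quick illustration of the difference between the units: 1 MiB equals 1,048,576 bytes, while 1 MB equals 1,000,000 bytes. A throughput requirement of 250 MiB/sec therefore corresponds to roughly 262 MB/sec, and 400 MiB/sec to roughly 419 MB/sec.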
NOTE
Azure managed disks provide local redundancy (LRS) only.
| Usage scenario | Standard HDD | Standard SSD | Premium storage | Ultra disk | Azure NetApp Files |
| --- | --- | --- | --- | --- | --- |
| DBMS log volume SAP HANA, Esv3/Edsv4 VM families | not supported | not supported | not supported | recommended | recommended² |
| DBMS log volume non-HANA, M/Mv2 VM families | not supported | restricted suitable (non-prod) | suitable for up to medium workload | recommended | not supported |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
² Using ANF requires /hana/data as well as /hana/log to be on ANF
Characteristics you can expect from the different storage types list like:
Characteristics you can expect from the different storage types list like:

| Scenario | Standard HDD | Standard SSD | Premium storage | Ultra disk | Azure NetApp Files |
| --- | --- | --- | --- | --- | --- |
| Zonal redundancy | not for managed disks | not for managed disks | not for managed disks | no | no |
¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
² Costs depend on provisioned IOPS and throughput
³ Creation of different ANF capacity pools does not guarantee deployment of capacity pools onto different storage units
IMPORTANT
To achieve less than 1 millisecond I/O latency using Azure NetApp Files (ANF), you need to work with Microsoft to arrange
the correct placement between your VMs and the NFS shares based on ANF. So far there is no mechanism in place that
provides an automatic proximity between a VM deployed and the NFS volumes hosted on ANF. Given the different setup
of the different Azure regions, the network latency added could push the I/O latency beyond 1 millisecond if the VM and
the NFS share are not allocated in proximity.
IMPORTANT
None of the currently offered Azure block storage based managed disks, or Azure NetApp Files offer any zonal or
geographical redundancy. As a result, you need to make sure that your high availability and disaster recovery
architectures are not relying on any type of Azure native storage replication for these managed disks, NFS or SMB shares.
| Capability | Comment | Notes/Links |
| --- | --- | --- |
| Maximum IOPS per disk | 20,000, dependent on disk size | Also consider VM limits |
| Azure Backup VM snapshots possible | YES | Except for Write Accelerator cached disks |
| Costs | MEDIUM | - |
Azure premium storage does not fulfill the SAP HANA storage latency KPIs with the common caching types offered
with Azure premium storage. In order to fulfill the storage latency KPIs for SAP HANA log writes, you need to
use Azure Write Accelerator caching as described in the article Enable Write Accelerator. Azure Write Accelerator
benefits all other DBMS systems for their transaction log writes and redo log writes. Therefore, it is
recommended to use it across all SAP DBMS deployments. For SAP HANA, the usage of Azure Write
Accelerator in conjunction with Azure premium storage is mandatory.
Summary: Azure premium storage is one of the Azure storage types recommended for SAP workload. This
recommendation applies to non-production as well as production systems. Azure premium storage is suited to
handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against
Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you need
to either over-provision storage capacity, or you need to use functionality like Windows Storage Spaces or
logical volume managers in Linux to build stripe sets that give you the desired capacity on the one side, but also
the necessary IOPS or throughput at best cost efficiency. A sketch of such a stripe set follows this paragraph.
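The following is a minimal sketch of such a stripe set with a logical volume manager on Linux. The device names, volume names, and the choice of four disks are assumptions for illustration only; stripe size and disk count have to come out of your own IOPS and throughput sizing.
# Assumption: four identical premium data disks attached as /dev/sdc../dev/sdf
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate vg_sap_data /dev/sdc /dev/sdd /dev/sde /dev/sdf
# --stripes 4 distributes the volume across all four disks so their IOPS and
# throughput add up; --stripesize is given in KiB
sudo lvcreate --extents 100%FREE --stripes 4 --stripesize 256 --name lv_sap_data vg_sap_data
sudo mkfs.xfs /dev/vg_sap_data/lv_sap_data
sudo mkdir -p /hana/data
sudo mount /dev/vg_sap_data/lv_sap_data /hana/data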
Azure burst functionality for premium storage
For Azure premium storage disks smaller than or equal to 512 GiB in capacity, burst functionality is offered. The exact
way disk bursting works is described in the article Disk bursting. When you read the article, you
understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the
nominal IOPS and throughput of the disks (for details on the nominal throughput, see Managed Disk pricing).
You accrue the delta of IOPS and throughput between your current usage and the nominal values
of the disk. The bursts are limited to a maximum of 30 minutes.
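As a hedged numeric illustration, using the values the Managed Disk pricing page lists for a P10 disk (128 GiB, 500 nominal IOPS, burst up to 3,500 IOPS): if your workload drives only 100 IOPS for 10 minutes, the disk accrues

accrued credits = (500 - 100) IOPS x 600 s = 240,000 I/Os
burst duration  = 240,000 I/Os / (3,500 - 500) IOPS = 80 s

that is, the disk could subsequently serve a burst at 3,500 IOPS for roughly 80 seconds before falling back to its nominal 500 IOPS.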
The ideal cases where this burst functionality can be planned in are likely the volumes or disks that
contain data files for the different DBMS. The I/O workload expected against those volumes, especially with
small to mid-ranged systems, is expected to look like:
Low to moderate read workload, since data ideally is cached in memory, or, as in the case of HANA, should
be completely in memory
Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
Backup workload that reads in a continuous stream in cases where backups are not executed via storage
snapshots
For SAP HANA, load of the data into memory after an instance restart
Especially on smaller DBMS systems, where your workload is handling only a few hundred transactions per
second, such burst functionality can make sense as well for the disks or volumes that store the transaction or
redo log. Expected workload against such a disk or volume looks like:
Regular writes to the disk that are dependent on the workload and the nature of workload since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
Read bursts when performing transaction log or redo log backups
Azure Ultra disk
Azure ultra disks deliver high throughput, high IOPS, and consistent low-latency disk storage for Azure IaaS
VMs. Some additional benefits of ultra disks include the ability to dynamically change the IOPS and throughput
of the disk, along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are
suited for data-intensive workloads such as SAP DBMS workload. Ultra disks can only be used as data disks and
can't be used as the base VHD disk that stores the operating system. We would recommend the usage of Azure
premium storage as the base VHD disk.
As you create an ultra disk, you have three dimensions you can define:
The capacity of the disk. Ranges are from 4 GiB to 65,536 GiB
Provisioned IOPS for the disk. Different maximum values apply to the capacity of the disk. Read the article
Ultra disk for more details
Provisioned storage bandwidth. Different maximum bandwidth applies dependent on the capacity of the
disk. Read the article Ultra disk for more details
The cost of a single disk is determined by the three dimensions you can define for the particular disk
separately.
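A hedged sketch of defining these three dimensions with Azure CLI follows; the resource names are hypothetical, and the VM to which the disk gets attached must be a zonal deployment with the ultra disk capability enabled.
# Create an ultra disk with explicitly provisioned capacity, IOPS, and throughput
az disk create \
  --resource-group rg-sap-prod \
  --name hana-log-ultra \
  --zone 1 \
  --sku UltraSSD_LRS \
  --size-gb 512 \
  --disk-iops-read-write 20000 \
  --disk-mbps-read-write 500

# IOPS and throughput can later be adjusted without restarting the VM
az disk update \
  --resource-group rg-sap-prod \
  --name hana-log-ultra \
  --disk-iops-read-write 40000 \
  --disk-mbps-read-write 750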
The capability matrix for SAP workload looks like:
| Capability | Comment | Notes/Links |
Summary: Azure ultra disks are a suitable storage with low latency for all kinds of SAP workload. So far, ultra
disks can only be used in combination with VMs that have been deployed through Availability Zones (zonal
deployment). Ultra disk does not support storage snapshots at this point in time. In contrast to all other storage
types, ultra disk cannot be used for the base VHD disk. Ultra disk is ideal for cases where the I/O workload fluctuates a lot
and you want to adapt deployed storage throughput or IOPS to storage workload patterns, instead of sizing for
maximum usage of bandwidth and IOPS.
NOTE
The minimum provisioning size is a 4 TiB unit that is called a capacity pool. You then create volumes out of this capacity
pool. The smallest volume you can build is 100 GiB. You can expand a capacity pool in 1-TiB steps. For pricing,
check the article Azure NetApp Files Pricing.
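A hedged Azure CLI sketch of this provisioning model follows. All resource names are hypothetical, a NetApp account and a delegated subnet are assumed to exist already, and flag spellings may vary with the CLI version.
# Create a 4 TiB capacity pool (the minimum pool size) at the Ultra service level
az netappfiles pool create \
  --resource-group rg-sap-prod \
  --account-name anf-sap \
  --pool-name pool-ultra \
  --location westeurope \
  --service-level Ultra \
  --size 4

# Carve a 100 GiB NFS v4.1 volume (the minimum volume size) out of the pool
az netappfiles volume create \
  --resource-group rg-sap-prod \
  --account-name anf-sap \
  --pool-name pool-ultra \
  --name sapmnt \
  --location westeurope \
  --service-level Ultra \
  --usage-threshold 100 \
  --file-path sapmnt \
  --vnet vnet-sap \
  --subnet anf-delegated \
  --protocol-types NFSv4.1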
NOTE
No other DBMS workload is supported for Azure NetApp Files based NFS or SMB shares. Updates and changes will be
provided if this is going to change.
As with Azure premium storage, a fixed or linear throughput size per GB can be a problem when you are
required to adhere to some minimum throughput numbers, as is the case for SAP HANA. With ANF, this
problem can become more pronounced than with Azure premium disk. In the case of Azure premium disk, you can
take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient
and achieve higher throughput at lower capacity. This kind of striping does not work for NFS or SMB shares hosted
on ANF. This restriction has resulted in deployments of overcapacity, like:
To achieve, for example, a throughput of 250 MiB/sec on an NFS volume hosted on ANF, you need to deploy
1.95 TiB capacity of the Ultra service level.
To achieve 400 MiB/sec, you would need to deploy 3.125 TiB capacity. You may need this over-provisioning of
capacity to achieve the throughput you require of the volume. This over-provisioning of
capacity impacts the pricing of smaller HANA instances.
When using NFS on top of ANF for the SAP /sapmnt directory, you usually get by with the
minimum capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However, customer
experience has shown that the related throughput of 12.8 MiB/sec (using the Ultra service level) may not be enough
and may have a negative impact on the stability of the SAP system. In such cases, customers could avoid issues
by increasing the size of the /sapmnt volume, so that more throughput is provided to that volume.
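These numbers follow directly from the throughput formula of the service levels; assuming the Ultra service level at 128 MiB/sec per provisioned TiB:

throughput = provisioned capacity (TiB) x 128 MiB/sec (Ultra service level)
1.95  TiB x 128 ≈ 250 MiB/sec
3.125 TiB x 128 = 400 MiB/sec
0.1   TiB (100 GiB) x 128 = 12.8 MiB/sec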
The capability matrix for SAP workload looks like:
| Capability | Comment | Notes/Links |
| --- | --- | --- |
| Shares/shared disk | YES | SMB 3.0, NFS v3, and NFS v4.1 |
| IOPS SLA | NO | - |
| Throughput SLA | NO | - |
| HANA certified | NO | - |
| Costs | LOW | - |
Summary: Azure standard SSD storage is the minimum recommendation for non-production VMs for the base
VHD and for eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This
Azure storage type is not supported anymore for hosting the SAP Global Transport Directory.
| Capability | Comment | Notes/Links |
| --- | --- | --- |
| IOPS SLA | NO | - |
| Throughput SLA | NO | - |
| HANA certified | NO | - |
| Costs | LOW | - |
Summary: Standard HDD is an Azure storage type that should only be used to store SAP backups. It should
only be used as the base VHD for rather inactive systems, like retired systems used for looking up data here and
there. No active development, QA, or production VMs should be based on that storage, nor should database
files be hosted on that storage.
| Storage type | Linux | Windows | Comments |
| --- | --- | --- | --- |
| Standard HDD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs |
| Standard SSD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs |
| Premium Storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Ultra disk storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Azure NetApp Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth! |
Next steps
Read the articles:
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA Azure virtual machine storage configurations
SAP workload on Azure virtual machine supported
scenarios
Designing SAP NetWeaver, Business One, Hybris, or S/4HANA system architectures in Azure opens a lot of
different opportunities for various architectures and tools to use to get to a scalable, efficient, and highly available
deployment. Dependent on the operating system or DBMS used, though, there are restrictions. Also, not all
scenarios that are supported on-premises are supported in the same way in Azure. This document leads
through the supported non-high-availability configurations and high-availability configurations and architectures,
using Azure VMs exclusively. For scenarios supported with HANA Large Instances, check the article Supported
scenarios for HANA Large Instances.
2-Tier configuration
An SAP 2-Tier configuration is considered to be built out of a combined layer of the SAP DBMS and application
layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the
case of a 2-Tier configuration, the DBMS and SAP application layer share the resources of the Azure VM. As a
result, you need to configure the different components in a way that those don't compete for resources. You also
need to be careful not to oversubscribe the resources of the VM. Such a configuration does not provide any high
availability beyond the Azure Service Level Agreements of the different Azure components involved.
A graphical representation of such a configuration can look like:
Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL
Server, Oracle, Db2, maxDB, and SAP ASE, for production and non-production cases. For SAP HANA as DBMS, such
configurations are supported for non-production cases only. This includes the deployment case of Azure
HANA Large Instances as well. For all OS/DBMS combinations supported on Azure, this type of configuration is
supported. However, it is mandatory that you set the configuration of the DBMS and the SAP components in a
way that DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the
physically available resources. This needs to be done by restricting the memory the DBMS is allowed to allocate.
You also need to limit the SAP Extended Memory on application instances. You also need to monitor the CPU
consumption of the VM overall to make sure that the components are not maxing out the CPU resources. A
hedged example of such memory capping follows this paragraph.
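As an illustration of such capping, using SQL Server as the DBMS; the 48 GiB value is an arbitrary example and has to come out of your own sizing, and on the SAP side extended memory is limited through instance profile parameters:
# Cap SQL Server at 48 GiB so the SAP application layer running in the same VM
# keeps the remainder of the VM memory (2-tier system)
sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -S localhost -Q "EXEC sp_configure 'max server memory (MB)', 49152; RECONFIGURE;"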
NOTE
For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as
described later in this document
3-Tier configuration
In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually
do that for larger systems, and for reasons of more flexibility with the resources of the SAP application layer.
In the simplest setup, there is no high availability beyond the Azure Service Level Agreements of the different
Azure components involved.
The graphical representation looks like:
This type of configuration is supported on Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of
SQL Server, Oracle, Db2, SAP HANA, maxDB, and SAP ASE for production and non-production cases. This is the
default deployment configuration for Azure HANA Large Instances. For simplification, we did not distinguish
between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier
configuration, there would be no high availability protection for SAP Central Services.
NOTE
For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as
described later in this document
For production systems, it is not recommended to leave SAP Central Services unprotected. For specifics
on so-called multi-SID configurations around SAP Central Instances and high availability of such multi-SID
configurations, see later sections of this document.
IMPORTANT
For none of the scenarios described above do we support configurations of multiple DBMS instances in one VM. This means that in
each of the cases, only one database instance can be deployed per VM and protected with the described high availability
methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is NOT supported at
this point in time. Also, Oracle Data Guard is supported for single-instance-per-VM deployment cases only.
Various database systems allow hosting multiple databases under one DBMS instance. As in the case of SAP
HANA, multiple databases can be hosted in multiple database containers (MDC). For cases where these multi-
database configurations are working within one failover cluster resource, these configurations are supported.
Configurations that are not supported are cases where multiple cluster resources would be required, as for
configurations where you would define multiple SQL Server Availability Groups under one SQL Server instance.
Dependent on the DBMS and/or operating systems, components like Azure Load Balancer might or might not be
required as part of the solution architecture.
Specifically for maxDB, the storage configuration needs to be different. In maxDB, the data and log files need to
be located on shared storage for high availability configurations. Only in the case of maxDB is shared storage
supported for high availability. For all other DBMS, separate storage stacks per node are the only supported disk
configurations.
Other high availability frameworks are known to exist and are known to run on Microsoft Azure as well. However,
Microsoft did not test those frameworks. If you want to build your high availability configuration with those
frameworks, you will need to work with the provider of that software to:
Develop a deployment architecture
Deploy the architecture
Support the architecture
IMPORTANT
Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native
storage. These soft appliances can be used to create NFS shares as well that theoretically could be used in the SAP HANA
scale-out deployments where a standby node is required. Due to various reasons, none of these storage soft appliances is
supported for any of the DBMS deployments by Microsoft and SAP on Azure. Deployments of DBMS on SMB shares are not
supported at all at this point in time. Deployments of DBMS on NFS shares are limited to NFS 4.1 shares on Azure NetApp
Files.
NOTE
Usage of Azure Site Recovery has not been tested for DBMS deployments under SAP workload. As a result, it is not
supported for the DBMS layer of SAP systems at this point in time. Other methods of replication by Microsoft and SAP
that are not listed are not supported. Using third-party software for replicating the DBMS layer of SAP systems between
different Azure regions needs to be supported by the vendor of the software and will not be supported through Microsoft
and SAP support channels.
Non-DBMS layer
For the SAP application layer and eventual shares or storage locations that are needed, customers leverage two major
scenarios:
The disaster recovery targets in the second Azure region are not being used for any production or non-
production purposes. In this scenario, the VMs that function as disaster recovery targets are ideally not
deployed, and the image and changes to the images of the production SAP application layer are replicated to the
disaster recovery region. A functionality that can perform such a task is Azure Site Recovery. Azure Site
Recovery supports Azure-to-Azure replication scenarios like this.
The disaster recovery targets are VMs that are actually in use by non-production systems. The whole SAP
landscape is spread across two different Azure regions, with production systems usually in one region and
non-production systems in another region. In a lot of customer deployments, the customer has a non-
production system that is equivalent to a production system. The customer has production application
instances pre-installed on the non-production application layer systems. In case of a failover, the non-
production instances would be shut down, the virtual names of the production VMs moved to the non-
production VMs (after assigning new IP addresses in DNS), and the pre-installed production instances are
started.
SAP Central Services clusters
SAP Central Services clusters that are using shared disks (Windows), SMB shares (Windows), or NFS shares are a
bit harder to replicate. On the Windows side, Windows Storage Replication is a possible solution. On Linux, rsync is
a viable solution, as sketched below.
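A sketch of such an rsync-based replication (hypothetical host names and paths, for example run from a cron job on the primary NFS host):
# Mirror the sapmnt content of <SID> to the secondary host; --delete keeps the
# target in sync by removing files that disappeared on the source
rsync -av --delete /export/sapmnt/SID/ secondary-host:/export/sapmnt/SID/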
Non-supported scenarios
There is a list of scenarios that are not supported for SAP workload on Azure architectures. Not supported
means SAP and Microsoft will not be able to support these configurations and need to defer to an eventual
involved third party that provided software to establish such architectures. Two of the categories are:
Storage soft appliances: There is a number of storage soft appliances offered in Azure Marketplace. Some of
the vendors offer their own documentation on how to use those storage soft appliances on Azure related to SAP
software. Support of configurations or deployments involving such storage soft appliances needs to be
provided by the vendor of those storage soft appliances. This fact is also manifested in SAP support note
#2015553
High Availability frameworks: Only Pacemaker and Windows Server Failover Cluster are supported high
availability frameworks for SAP workload on Azure. As mentioned earlier, the solution of SIOS Datakeeper is
described and documented by Microsoft. Nevertheless, the components of SIOS Datakeeper need to be
supported through SIOS as the vendor providing those components. SAP also listed other certified high
availability frameworks in various SAP notes. Some of them were certified by the third-party vendor for Azure
as well. Nevertheless, support for configurations using those products need to be provided by the product
vendor. Different vendors have different integration into the SAP support processes. You should clarify what
support process works best for the particular vendor before deciding to use the product in SAP configurations
deployed on Azure.
Shared disk clusters where database files are residing on the shared disks are not supported, with the
exception of maxDB. For all other databases, the supported solution is to have separate storage locations
instead of an SMB or NFS share or shared disk to configure high-availability scenarios
Other scenarios that are not supported are scenarios like:
Deployment scenarios that introduce a larger network latency between the SAP application tier and the SAP
DBMS tier in SAP's common architecture, as represented in NetWeaver, S/4HANA, and, for example, Hybris. This includes:
Deploying one of the tiers on-premises whereas the other tier is deployed in Azure
Deploying the SAP application tier of a system in a different Azure region than the DBMS tier
Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where
such architecture patterns are provided by an Azure native service
Deploying network virtual appliances between the SAP application tier and the DBMS layer
Leveraging storage that is hosted in datacenters co-located to Azure datacenter for the SAP DBMS tier
or SAP global transport directory
Deploying the two layers with two different cloud vendors. For example, deploying the DBMS tier in
Oracle Cloud Infrastructure and the application tier in Azure
Multi-Instance HANA Pacemaker cluster configurations
Windows Cluster configurations with shared disks through SOFS or SMB on ANF for SAP databases supported
on Windows. Instead, we recommend the usage of native high availability replication of the particular
databases and the use of separate storage stacks
Deployment of SAP databases supported on Linux with database files located in NFS shares on top of ANF
with the exception of SAP HANA
Deployment of Oracle DBMS on any other guest OS than Windows and Oracle Linux. See also SAP support
note #2039619
Scenarios that we did not test, and therefore have no experience with, include:
Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend leveraging the database native
asynchronous replication functionality for potential disaster recovery configurations
Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP NetWeaver
What SAP software is supported for Azure
deployments
This article describes how you can find out what SAP software is supported for Azure deployments and what the
necessary operating system releases or DBMS releases are.
To evaluate whether your current SAP software is supported, and what OS and DBMS releases are supported with
your SAP software in Azure, you need access to:
SAP support notes
SAP Product Availability Matrix
NOTE
There are some specific VM types, HANA Large Instances, or SAP workloads that require more recent OS
releases. Such cases are mentioned throughout the document and are clearly documented either in SAP
notes or other SAP publications.
The following section lists general SAP platforms that are supported, the releases that are supported, and,
more importantly, the SAP kernels that are supported. It lists NetWeaver/ABAP or Java stacks that are supported
and which need minimum kernel releases. More recent ABAP stacks are supported on Azure but do not need
minimum kernel releases, since changes for Azure got implemented from the start of the development of the
more recent stacks.
You need to check:
Whether the SAP applications you are running are covered by the minimum releases stated. If not, you need
to define a new target release and check in the SAP Product Availability Matrix what operating system builds and
DBMS combinations are supported with the new target release, so that you can choose the right operating
system release and DBMS release
Whether you need to update your SAP kernels in a move to Azure
Whether you need to update SAP Support Packages, especially Basis Support Packages, which can be required
for cases where you are required to move to a more recent DBMS release
The next section goes into more details on other SAP products and DBMS releases that are supported by SAP on
Azure for Windows and Linux.
NOTE
The minimum releases of the different DBMS are carefully chosen and might not always reflect the whole spectrum of DBMS
releases the different DBMS vendors support on Azure in general. Many SAP workload-related considerations were taken
into account to define those minimum releases. There is no effort to test and qualify older DBMS releases.
NOTE
The minimum releases listed represent older versions of operating systems and database releases. We highly
encourage you to use the most recent operating system releases and database releases. In a lot of cases, more recent operating
system and database releases took the use case of running in a public cloud into consideration and adapted their code to
optimize for running in a public cloud, or more specifically Azure.
NOTE
The units starting with the letter 'S' are HANA Large Instances units.
NOTE
SAP has no specific certification dependent on the SAP HANA major releases. Contrary to common opinion, the column
Certification scenario in the HANA certified IaaS platforms list makes no statement about the HANA
major or minor release certified. You need to assume that all the units listed can be used for HANA 1.0 and
HANA 2.0, as long as the certified operating system releases for the specific units are supported by HANA 1.0 releases as
well.
For the usage of SAP HANA, different minimum OS releases may apply than for the general NetWeaver cases.
You need to check out the supported operating systems for each unit individually since those might vary. You do
so by clicking on each unit. More details will appear. One of the details listed is the different operating systems
supported for this specific unit.
NOTE
Azure HANA Large Instance units are more restrictive with supported operating systems compared to Azure VMs. On the
other hand, Azure VMs may enforce more recent operating system releases as minimum releases. This is especially true for some
of the larger VM units that required changes to Linux kernels.
Knowing the supported OS for the Azure infrastructure, you need to check SAP support note #2235581 for the
exact SAP HANA releases and patch levels that are supported with the Azure units you are targeting.
IMPORTANT
The step of checking the exact SAP HANA releases and patch levels supported is very important. In a lot of cases, support
of a certain OS release is dependent on a specific patch level of the SAP HANA executables.
As you know the specific HANA releases you can run on the targeted Azure infrastructure, you need to check in
the SAP Product Availability Matrix to find out whether there are restrictions with the SAP product releases that
support the HANA releases you filtered out.
Certified Azure VMs and HANA Large Instance units and business
transaction throughput
Besides evaluating supported operating system releases, DBMS releases, and dependent supported SAP software
releases for Azure infrastructure units, you need to qualify these units by business transaction
throughput, which is expressed in the unit 'SAPS' by SAP. All SAP sizing depends on SAPS calculations.
Evaluating existing SAP systems, you usually can, with the help of your infrastructure provider, calculate the SAPS
of the units, for the DBMS layer as well as for the application layer. In other cases where new functionality is
created, a sizing exercise with SAP can reveal the required SAPS numbers for the application layer and the DBMS
layer. As an infrastructure provider, Microsoft is obliged to provide the SAP throughput characterization of the
different units that are either NetWeaver and/or HANA certified.
For Azure VMs, these SAPS throughput numbers are documented in SAP support note #1928533. For Azure
HANA Large Instance units, the SAPS throughput numbers are documented in SAP support note #2316233.
Looking into SAP support note #1928533, the following remarks apply:
For M-Series Azure VMs and Mv2-Series Azure VMs, different minimum OS releases apply than
for other Azure VM types. The requirement for more recent OS releases is based on changes the different
operating system vendors had to provide in their operating system releases to either enable their operating
systems running on the specific Azure VM types, or to optimize performance and throughput of SAP workload
on those VM types
There are two tables that specify different VM types. The second table specifies SAPS throughput for Azure
VM types that support Azure standard storage only. DBMS deployment on the units specified in the second
table of the note is not supported
NOTE
As indicated in the SAP support note, you need to check in the SAP PAM to identify the correct support package level to
be supported on Azure
SAP Datahub/Vora support in Azure Kubernetes Services (AKS) is detailed in SAP support note #2464722
Support for SAP BPC 10.1 SP08 is described in SAP support note #2451795
Support for SAP Hybris Commerce Platform on Azure is detailed in the Hybris documentation. The supported
DBMS for SAP Hybris Commerce Platform list like:
SQL Server and Oracle on the Windows operating system platform. The same minimum releases apply as for
SAP NetWeaver. See SAP support note #1928533 for details
SAP HANA on Red Hat and SUSE Linux. SAP HANA certified VM types are required, as documented earlier in
this document. SAP (Hybris) Commerce Platform is considered OLTP workload
Azure SQL Database (SQL Azure DB) as of SAP (Hybris) Commerce Platform version 1811
Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP
NetWeaver
NOTE
Azure has two different deployment models you can use to create and work with resources: Azure Resource
Manager and classic. This article covers the use of the Resource Manager deployment model. We recommend the
Resource Manager deployment model for new deployments instead of the classic deployment model.
Azure Virtual Machines is the solution for organizations that need compute and storage resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy
classical applications, like SAP NetWeaver-based applications, in Azure. Extend an application's reliability
and availability without additional on-premises resources. Azure Virtual Machines supports cross-
premises connectivity, so you can integrate Azure Virtual Machines into your organization's on-premises
domains, private clouds, and SAP system landscape.
In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure,
including alternate deployment options and troubleshooting. This article builds on the information in
Azure Virtual Machines planning and implementation for SAP NetWeaver. It also complements SAP
installation documentation and SAP Notes, which are the primary resources for installing and deploying
SAP software.
Prerequisites
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module,
which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module
and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation
instructions, see Install Azure PowerShell.
Setting up an Azure virtual machine for SAP software deployment involves multiple steps and resources.
Before you start, make sure that you meet the prerequisites for installing SAP software on virtual
machines in Azure.
Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal. For both tools,
you need a local computer running Windows 7 or a later version of Windows. If you want to manage
only Linux VMs and you want to use a Linux computer for this task, you can use Azure CLI.
Internet connection
To download and run the tools and scripts that are required for SAP software deployment, you must be
connected to the Internet. The Azure VM that is running the Azure Extension for SAP also needs access to
the Internet. If the Azure VM is part of an Azure virtual network or on-premises domain, make sure that
the relevant proxy settings are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.
Topology and networking
You need to define the topology and architecture of the SAP deployment in Azure:
Azure storage accounts to be used
Virtual network where you want to deploy the SAP system
Resource group to which you want to deploy the SAP system
Azure region where you want to deploy the SAP system
SAP configuration (two-tier or three-tier)
VM sizes and the number of additional data disks to be mounted to the VMs
SAP Correction and Transport System (CTS) configuration
Create and configure Azure storage accounts (if required) or Azure virtual networks before you begin
the SAP software deployment process. For information about how to create and configure these
resources, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
SAP sizing
For SAP sizing, know the following information:
Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the SAP Application
Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth of eventual communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed SAP system
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application resources in
your Azure subscription. For more information, see Azure Resource Manager overview.
Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP resources:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in
Azure.
SAP Note 1409604 has the required SAP Host Agent version for Windows in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.
SAP Note 2069760 has general information about Oracle Linux 7.x.
SAP Note 1999351 has additional troubleshooting information for the Azure Extension for SAP.
SAP Note 1597355 has general information about swap-space for Linux.
SAP on Azure SCN page has news and a collection of useful resources.
SAP Community WIKI has all required SAP Notes for Linux.
SAP-specific PowerShell cmdlets that are part of Azure PowerShell.
SAP-specific Azure CLI commands that are part of Azure CLI.
Windows resources
These Microsoft articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver (this article)
Azure Virtual Machines DBMS deployment for SAP NetWeaver
Windows
To prepare a Windows image that you can use to deploy multiple virtual machines, the Windows
settings (like Windows SID and hostname) must be abstracted or generalized on the on-premises
VM. You can use sysprep to do this.
Linux
To prepare a Linux image that you can use to deploy multiple virtual machines, some Linux settings
must be abstracted or generalized on the on-premises VM. You can use waagent -deprovision to do
this. For more information, see Capture a Linux virtual machine running on Azure and the Azure
Linux agent user guide.
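A minimal sketch of the generalization step on Linux (run it only on a VM you intend to capture; the operation is not reversible):
# Remove provisioning data; +user additionally deletes the last provisioned
# user account, -force skips the confirmation prompt
sudo waagent -deprovision+user -force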
You can prepare and create a custom image, and then use it to create multiple new VMs. This is
described in Azure Virtual Machines planning and implementation for SAP NetWeaver. Set up your
database content either by using SAP Software Provisioning Manager to install a new SAP system
(restores a database backup from a disk that's attached to the virtual machine) or by directly restoring a
database backup from Azure storage, if your DBMS supports it. For more information, see Azure Virtual
Machines DBMS deployment for SAP NetWeaver. If you have already installed an SAP system on your
on-premises VM (especially for two-tier systems), you can adapt the SAP system settings after the
deployment of the Azure VM by using the System Rename procedure supported by SAP Software
Provisioning Manager (SAP Note 1619720). Otherwise, you can install the SAP software after you
deploy the Azure VM.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from a custom
image:
Windows
Azure Virtual Machine Agent overview
Linux
Azure Linux Agent User Guide
The following flowchart shows the sequence of steps for moving an on-premises VM by using a non-
generalized Azure VHD:
If the disk is already uploaded and defined in Azure (see Azure Virtual Machines planning and
implementation for SAP NetWeaver), do the tasks described in the next few sections.
Create a virtual machine
To create a deployment by using a private OS disk through the Azure portal, use the SAP template
published in the azure-quickstart-templates GitHub repository. You also can manually create a virtual
machine, by using PowerShell.
Two-tier configuration (only one virtual machine) template (sap-2-tier-user-disk)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk (sap-2-tier-user-disk-md)
To create a two-tier system by using only one virtual machine and a Managed Disk, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can create a new resource group, or select an existing resource group in the subscription.
Location: Where to deploy the template. If you selected an existing resource group, the location of that resource group is used.
2. Settings:
SAP System ID: The SAP System ID.
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
OS disk VHD URI (unmanaged disk template only): The URI of the private OS disk, for example, https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.
OS disk Managed Disk ID (managed disk template only): The ID of the Managed Disk OS disk, /subscriptions/92d102f7-81a5-4df7-9877-54987ba97dd9/resourceGroups/group/providers/Microsoft.Compute/disks/WIN
New or existing subnet: Determines whether a new virtual network and subnet are created, or an existing subnet is used. If you already have a virtual network that is connected to your on-premises network, select Existing.
Subnet ID: If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should be assigned to, name the ID of that specific subnet. The ID usually looks like this: /subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
3. Terms and conditions:
Review and accept the legal terms.
4. Select Purchase.
Install the VM Agent
To use the templates described in the preceding section, the VM Agent must be installed on the OS disk,
or the deployment will fail. Download and install the VM Agent in the VM, as described in Download,
install, and enable the Azure VM Agent.
If you don't use the templates described in the preceding section, you can also install the VM Agent
afterwards.
Join a domain (Windows only )
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure
site-to-site VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines
planning and implementation for SAP NetWeaver), it is expected that the VM joins an on-premises
domain. For more information about considerations for this task, see Join a VM to an on-premises
domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your
VM. If your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not
be able to access the Internet, and won't be able to download the required VM extensions or collect
Azure infrastructure information for the SAP Host Agent via the SAP extension for Azure. In that case,
see Configure the proxy.
Configure Azure VM Extension for SAP
To be sure SAP supports your environment, set up the Azure Extension for SAP as described in
Configure the Azure Extension for SAP. Check the prerequisites for SAP, and required minimum versions
of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
SAP VM check
Check whether the VM extension for SAP is working, as described in Checks and Troubleshooting.
(Get-Module Az.Compute).Version
Deploy Azure CLI
Follow the steps described in the article Install the Azure CLI
Check frequently for updates to Azure CLI, which usually is updated monthly.
To check the version of Azure CLI that is installed on your computer, run this command:
az --version
If the agent is already installed, to update the Azure Linux Agent, do the steps described in Update the
Azure Linux Agent on a VM to the latest version from GitHub.
Configure the proxy
The steps you take to configure the proxy in Windows are different from the way you configure the
proxy in Linux.
Windows
Proxy settings must be set up correctly for the Local System account to access the Internet. If your proxy
settings are not set by Group Policy, you can configure the settings for the Local System account.
1. Go to Start, enter gpedit.msc, and then select Enter.
2. Select Computer Configuration > Administrative Templates > Windows Components >
Internet Explorer. Make sure that the setting Make proxy settings per-machine (rather than
per-user) is disabled or not configured.
3. In Control Panel, go to Network and Sharing Center > Internet Options.
4. On the Connections tab, select the LAN settings button.
5. Clear the Automatically detect settings check box.
6. Select the Use a proxy server for your LAN check box, and then enter the proxy address and
port.
7. Select the Advanced button.
8. In the Exceptions box, enter the IP address 168.63.129.16. Select OK.
Linux
Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent, which is located
at /etc/waagent.conf.
Set the following parameters:
1. HTTP proxy host. For example, set it to proxy.corp.local.
HttpProxy.Host=<proxy host>
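2. HTTP proxy port. For example, set it to 80.
HttpProxy.Port=<proxy port>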
The proxy settings in /etc/waagent.conf also apply to the required VM extensions. If you want to use the
Azure repositories, make sure that the traffic to these repositories is not going through your on-
premises intranet. If you created user-defined routes to enable forced tunneling, make sure that you add
a route that routes traffic to the repositories directly to the Internet, and not through your site-to-site
VPN connection.
SLES
You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg. The following
figure shows an example:
RHEL
You also need to add routes for the IP addresses of the hosts listed in /etc/yum.repos.d/rhui-load-
balancers. For an example, see the preceding figure.
Oracle Linux
There are no repositories for Oracle Linux on Azure. You need to configure your own repositories
for Oracle Linux or use the public repositories.
For more information about user-defined routes, see User-defined routes and IP forwarding.
Configure the Azure Extension for SAP
NOTE
General Support Statement: Please always open an incident with SAP on component BC-OP-NT-AZR for
Windows or BC-OP-LNX-AZR if you need support for the Azure Extension for SAP. There are dedicated Microsoft
support engineers working in the SAP support system to help our joint customers.
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on Azure, the
Azure VM Agent is installed on the virtual machine. The next step is to deploy the Azure Extension for
SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more
information, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
We are in the process of releasing a new version of the Azure Extension for SAP. The new extension uses
the system assigned identity of the virtual machine to get information about the attached disks, network
interfaces and the virtual machine itself. To be able to access these resources, the system identity of the
virtual machine needs Reader permission for the virtual machine, OS disk, data disks, and network
interfaces. We currently recommend installing the new extension only in the following scenarios:
1. You want to install the extension with Terraform, Azure Resource Manager Templates or with other
means than Azure CLI or Azure PowerShell
2. You want to install the extension on SUSE SLES 15 or higher.
3. Microsoft or SAP support asks you to install the new extension
4. You want to use Azure Ultra Disk or Standard Managed Disks
For these scenarios, follow the steps in chapter Configure the new Azure Extension for SAP with Azure
PowerShell for Azure PowerShell or Configure the new Azure Extension for SAP with Azure CLI for Azure
CLI.
Follow Azure PowerShell or Azure CLI to install and configure the standard version of the Azure
Extension for SAP.
Azure PowerShell for Linux and Windows VMs
To install the Azure Extension for SAP by using PowerShell:
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet. For more
information, see Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzEnvironment. If you want to use global Azure, your environment is AzureCloud. For Azure
China 21Vianet, select AzureChinaCloud.
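A typical invocation of the cmdlet looks like the following; resource group and VM name are placeholders for your own values:
Set-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <VM name>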
After you enter your account data, the script deploys the required extensions and enables the required
features. This can take several minutes. For more information about Set-AzVMAEMExtension, see Set-
AzVMAEMExtension.
The Set-AzVMAEMExtension configuration does all the steps to configure host data collection for SAP.
The script output includes the following information:
Confirmation that data collection for the OS disk and all additional data disks has been configured.
The next two messages confirm the configuration of Storage Metrics for a specific storage account.
One line of output gives the status of the actual update of the VM Extension for SAP configuration.
Another line of output confirms that the configuration has been deployed or updated.
The last line of output is informational. It shows your options for testing the VM Extension for SAP
configuration.
To check that all steps of Azure VM Extension for SAP configuration have been executed successfully,
and that the Azure Infrastructure provides the necessary data, proceed with the readiness check for
the Azure Extension for SAP, as described in Readiness check for Azure Extension for SAP.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.
Azure CLI for Linux VMs
To install the Azure Extension for SAP by using Azure CLI:
1. Install Azure classic CLI, as described in Install the Azure classic CLI.
2. Sign in with your Azure account:
azure login
az login
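With recent versions of Azure CLI, the extension is configured through the aem CLI extension; a hedged sketch with hypothetical resource names:
# One-time: add the aem extension to Azure CLI
az extension add --name aem
# Configure the Azure Extension for SAP on the VM
az vm aem set --resource-group rg-sap-prod --name vm-sap-app1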
6. Verify that the Azure Extension for SAP is active on the Azure Linux VM. Check whether the file
/var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a command prompt, run this
command to display information collected by the Azure Extension for SAP:
cat /var/lib/AzureEnhancedMonitor/PerfCounters
...
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
...
Configure the new Azure Extension for SAP with Azure PowerShell
The new VM Extension for SAP uses a Managed Identity assigned to the VM to access monitoring and
configuration data of the VM. To install the new Azure Extension for SAP by using PowerShell, you first
have to assign such an identity to the VM and grant that identity access to all resources that are in use
by that VM, for example disks and network interfaces.
NOTE
The following steps require Owner privileges over the resource group or individual resources (virtual machine,
data disks etc.)
Configure the new Azure Extension for SAP with Azure CLI
The new VM Extension for SAP uses a Managed Identity assigned to the VM to access monitoring and
configuration data of the VM. To install the new Azure Extension for SAP by using Azure CLI, you first
have to assign such an identity to the VM and grant that identity access to all resources that are in use
by that VM, for example disks and network interfaces.
NOTE
The following steps require Owner privileges over the resource group or individual resources (virtual machine,
data disks etc.)
az login
5. Follow the steps in the article Configure managed identities for Azure resources on an Azure VM using
Azure CLI to enable a system-assigned managed identity for the VM. User-assigned
managed identities are not supported by the VM extension for SAP. However, you can enable
both a system-assigned and a user-assigned identity.
Example:
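# Hedged example with hypothetical names: enable a system-assigned identity
az vm identity assign --resource-group rg-sap-prod --name vm-sap-app1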
6. Assign the Managed Identity access to the resource group of the VM or to all network interfaces,
managed disks and the VM itself as described in Assign a managed identity access to a resource
using Azure CLI
Example:
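# Hedged example with hypothetical names: grant the VM's system-assigned
# identity Reader on the resource group containing the VM and its resources
principalId=$(az vm show --resource-group rg-sap-prod --name vm-sap-app1 \
  --query identity.principalId --output tsv)
scope=$(az group show --name rg-sap-prod --query id --output tsv)
az role assignment create --assignee "$principalId" --role Reader --scope "$scope"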
7. Run the following Azure CLI command to install the Azure Extension for SAP. The extension is
currently only supported in AzureCloud. Azure China 21Vianet, Azure Government, or any of the
other special environments are not yet supported.
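A hedged sketch with hypothetical resource names (the --install-new-extension switch of the aem CLI extension selects the new VM Extension for SAP):
az vm aem set --resource-group rg-sap-prod --name vm-sap-app1 --install-new-extension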
This check makes sure that all performance metrics that appear inside your SAP application are provided
by the underlying Azure Extension for SAP.
Run the readiness check on a Windows VM
1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a Command Prompt window.
3. At the command prompt, change the directory to the installation folder of the Azure Extension for
SAP:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop
The version in the path to the extension might vary. If you see folders for multiple versions of the
extension in the installation folder, check the configuration of the AzureEnhancedMonitoring
Windows service, and then switch to the folder indicated as Path to executable.
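4. At the command prompt, run azperflib.exe. The tool displays the populated Azure performance counters for SAP.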
NOTE
Azperflib.exe runs in a loop and updates the collected counters every 60 seconds. To end the loop, close
the Command Prompt window.
If the Azure Extension for SAP is not installed, or the AzureEnhancedMonitoring service is not running,
the extension has not been configured correctly. For detailed information about how to deploy the
extension, see Troubleshooting the Azure Extension for SAP.
NOTE
Azperflib.exe is not a component that you can use for your own purposes. It is a component that delivers Azure
infrastructure data related to the VM exclusively for the SAP Host Agent.
Check the output of azperflib.exe
Azperflib.exe output shows all populated Azure performance counters for SAP. At the bottom of the list
of collected counters, a summary and health indicator show the status of Azure Extension for SAP.
Check the values reported for Counters total and for Health status .
Interpret the resulting values as follows:
API Calls - not available : Counters that are not available might be either not applicable to the virtual
machine configuration or errors. See Health status .
Counters total - empty : The following two Azure storage counters can be empty:
Storage Read Op Latency Server msec
Storage Read Op Latency E2E msec
All other counters must have values.
If the Health status value is not OK , follow the instructions in Health check for Azure Extension for SAP
configuration.
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the Azure Extension for SAP.
a. Run more /var/lib/AzureEnhancedMonitor/PerfCounters
Expected result : Returns list of performance counters. The file should not be empty.
b. Run cat /var/lib/AzureEnhancedMonitor/PerfCounters | grep Error
Expected result : Returns one line where the error is none , for example:
3;config;Error;;0;0;none;0;1456416792;tst-servercs;
c. Run more /var/lib/AzureEnhancedMonitor/LatestErrorRecord
Expected result : Displays error information, if an error exists.
If the preceding checks were not successful, run these additional checks:
1. Make sure that the waagent is installed and enabled.
a. Run sudo ls -al /var/lib/waagent/
Expected result : Lists the content of the waagent directory.
b. Run ps -ax | grep waagent
Expected result : Displays one entry similar to: python /usr/sbin/waagent -daemon
2. Make sure that the Azure Extension for SAP is installed and running.
a. Run
sudo sh -c 'ls -al /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*/'
Expected result : Lists the content of the Azure Extension for SAP directory.
b. Run ps -ax | grep AzureEnhanced
3. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d
Readiness check for the new Azure Extension for SAP
NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, see the chapter Readiness check for Azure Extension for SAP.
This check makes sure that all performance metrics that appear inside your SAP application are provided
by the underlying Azure Extension for SAP.
Run the readiness check on a Windows VM
1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a web browser and navigate to http://127.0.0.1:11812/azure4sap/metrics
3. The browser should display or download an XML file that contains the monitoring data of your
virtual machine. If that is not the case, make sure that the Azure Extension for SAP is installed.
Check the content of the XML file
The XML file that you can access at http://127.0.0.1:11812/azure4sap/metrics contains all populated
Azure performance counters for SAP. It also contains a summary and health indicator of the status of
Azure Extension for SAP.
Check the value of the Provider Health Description element. If the value is not OK , follow the
instructions in Health check for new Azure Extension for SAP configuration.
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the following command:
curl http://127.0.0.1:11812/azure4sap/metrics
Expected result : Returns an XML document that contains the monitoring information of the
virtual machine, its disks and network interfaces.
If the preceding check was not successful, run these additional checks:
1. Make sure that the waagent is installed and enabled.
a. Run sudo ls -al /var/lib/waagent/
Expected result : Lists the content of the waagent directory.
b. Run ps -ax | grep waagent
Expected result : Displays one entry similar to: python /usr/sbin/waagent -daemon
2. Make sure that the Azure Extension for SAP is installed and running.
a. Run
sudo sh -c 'ls -al /var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-*/'
Expected result : Lists the content of the Azure Extension for SAP directory.
b. Run ps -ax | grep AzureEnhanced
3. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d
Health check for Azure Extension for SAP configuration
If some of the infrastructure data is not delivered correctly as indicated by the test described in
Readiness check for Azure Extension for SAP, run the Test-AzVMAEMExtension cmdlet to check whether
the Azure infrastructure and the Azure Extension for SAP are configured correctly.
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet, as described
in Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzEnvironment . To use global Azure, select the AzureCloud environment. For Azure China
21Vianet, select AzureChinaCloud .
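A minimal sketch, assuming <rg> and <vm> are placeholders for the resource group and VM name:
Test-AzVMAEMExtension -ResourceGroupName "<rg>" -VMName "<vm>"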
3. The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK . If some checks do not display OK , run the update cmdlet
as described in Configure the Azure Extension for SAP. Wait 15 minutes, and repeat the checks described
in Readiness check for Azure Extension for SAP and Health check for Azure Extension for SAP
configuration. If the checks still indicate a problem with some or all counters, see Troubleshooting the
Azure Extension for SAP.
NOTE
You can experience some warnings in cases where you use Managed Standard Azure Disks. Warnings will be
displayed instead of the tests returning "OK". This is normal and intended for that disk type. See also
Troubleshooting the Azure Extension for SAP.
Health check for the new Azure Extension for SAP configuration
NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, see the chapter Health check for the Azure Extension for SAP configuration.
If some of the infrastructure data is not delivered correctly as indicated by the test described in
Readiness check for Azure Extension for SAP, run the Get-AzVMExtension cmdlet to check whether the
Azure Extension for SAP is installed. The Test-AzVMAEMExtension does not yet support the new extension.
Once the cmdlet supports the new extension, we will update this article.
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet, as described
in Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzEnvironment . To use global Azure, select the AzureCloud environment. For Azure China
21Vianet, select AzureChinaCloud .
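A minimal sketch, assuming <rg> and <vm> are placeholders and MonitorX64Windows is the assumed extension name on a Windows VM:
Get-AzVMExtension -ResourceGroupName "<rg>" -VMName "<vm>" -Name "MonitorX64Windows"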
3. The cmdlet tests the configuration of the VM Extension for SAP on the virtual machine you select.
Troubleshooting Azure Extension for SAP
NOTE
There are two versions of the VM extension. This chapter covers the default VM extension. If you have installed
the new VM extension, see the chapter Troubleshooting the new Azure Extension for SAP.
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine or rerun the Set-AzVMAEMExtension configuration script.
Service for Azure Extension for SAP does not exist
Issue
The AzureEnhancedMonitoring Windows service does not exist.
Solution
If the service does not exist, the Azure Extension for SAP has not been installed correctly. Redeploy the
extension by using the steps described for your deployment scenario in Deployment scenarios of VMs
for SAP in Azure.
After you deploy the extension, wait about an hour, and then check again whether the Azure performance
counters are provided in the Azure VM.
Service for Azure Extension for SAP exists, but fails to start
Issue
The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start. For more
information, check the application event log.
Solution
The configuration is incorrect. Restart the Azure Extension for SAP in the VM, as described in Configure
the Azure Extension for SAP.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. The service gets
data from several sources. Some configuration data is collected locally, and some performance metrics
are read from Azure Diagnostics. Storage counters are taken from the logging that you enable on the storage
subscription level.
If troubleshooting by using SAP Note 1999351 doesn't resolve the issue, rerun the
Set-AzVMAEMExtension configuration script. You might have to wait an hour because storage analytics or
diagnostics counters might not be created immediately after they are enabled. If the problem persists,
open an SAP customer support message on the component BC-OP-NT-AZR for Windows or BC-OP-
LNX-AZR for a Linux virtual machine.
The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for SAP.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine and/or rerun the Set-AzVMAEMExtension configuration script.
The execution of Set-AzVMAEMExtension and Test-AzVMAEMExtension shows warning messages stating that Standard Managed
Disks are not supported
Issue
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.
When you execute azperflib.exe as described earlier, you can get a result that indicates a non-healthy state.
Solution
The messages are caused by the fact that Standard Managed Disks do not deliver the APIs that are used by
the Azure Extension for SAP to check on statistics of the Standard Azure Storage Accounts. This is not a
matter of concern. The reason for introducing the collection of data for Standard Disk Storage accounts was
the throttling of inputs and outputs that occurred frequently. Managed disks avoid such throttling by
limiting the number of disks in a storage account. Therefore, not having that type of data is not
critical.
Troubleshooting the new Azure Extension for SAP
NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, see the chapter Troubleshooting the Azure Extension for SAP.
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine or install the VM extension again.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows process collects performance metrics in Azure. The process
gets data from several sources. Some configuration data is collected locally, and some performance
metrics are read from Azure Monitor.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, open an SAP customer
support message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux
virtual machine. Please attach the log file
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\
<version>\logapp.txt to the incident.
The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for SAP.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine and/or install the VM extension again.
Possible causes include:
1. An outdated configuration
2. No network connection to Azure
3. Issues with the WAD setup
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
This guide is part of the documentation on how to implement and deploy SAP software on Microsoft Azure.
Before you read this guide, read the Planning and implementation guide and articles the planning guide
points you to. This document covers the generic deployment aspects of SAP-related DBMS systems on
Microsoft Azure virtual machines (VMs) by using the Azure infrastructure as a service (IaaS) capabilities.
The paper complements the SAP installation documentation and SAP Notes, which represent the primary
resources for installations and deployments of SAP software on given platforms.
This document introduces considerations for running SAP-related DBMS systems in Azure VMs. There
are few references to specific DBMS systems in this chapter. Instead, the specific DBMS systems are handled
in the papers that follow this document.
Definitions
Throughout the document, these terms are used:
IaaS : Infrastructure as a service.
PaaS : Platform as a service.
SaaS : Software as a service.
SAP component : An individual SAP application such as ERP Central Component (ECC), Business
Warehouse (BW), Solution Manager, or Enterprise Portal (EP). SAP components can be based on
traditional ABAP or Java technologies or on a non-NetWeaver-based application such as Business
Objects.
SAP environment : One or more SAP components logically grouped to perform a business function
such as development, quality assurance, training, disaster recovery, or production.
SAP landscape : This term refers to the entire SAP assets in a customer's IT landscape. The SAP
landscape includes all production and nonproduction environments.
SAP system : The combination of a DBMS layer and an application layer of, for example, an SAP ERP
development system, an SAP Business Warehouse test system, or an SAP CRM production system. In
Azure deployments, dividing these two layers between on-premises and Azure isn't supported. As a
result, an SAP system is either deployed on-premises or it's deployed in Azure. You can deploy the
different systems of an SAP landscape in Azure or on-premises. For example, you could deploy the
SAP CRM development and test systems in Azure but deploy the SAP CRM production system on-
premises.
Cross-premises : Describes a scenario where VMs are deployed to an Azure subscription that has
site-to-site, multisite, or Azure ExpressRoute connectivity between the on-premises data centers and
Azure. In common Azure documentation, these kinds of deployments are also described as cross-
premises scenarios.
The reason for the connection is to extend on-premises domains, on-premises Active Directory, and
on-premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the
subscription. With this extension, the VMs can be part of the on-premises domain. Domain users of
the on-premises domain can access the servers and run services on those VMs, like DBMS services.
Communication and name resolution between VMs deployed on-premises and VMs deployed in
Azure is possible. This scenario is the most common scenario in use to deploy SAP assets on Azure.
For more information, see Planning and design for VPN gateway.
NOTE
Cross-premises deployments of SAP systems, where Azure virtual machines that run SAP systems are members of
an on-premises domain, are supported for production SAP systems. Cross-premises configurations are supported
for deploying parts or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure
requires those VMs to be part of an on-premises domain and Active Directory/LDAP.
In previous versions of the documentation, hybrid-IT scenarios were mentioned. The term hybrid is rooted in the fact
that there's a cross-premises connectivity between on-premises and Azure. In this case, hybrid also means that the
VMs in Azure are part of the on-premises Active Directory.
Some Microsoft documentation describes cross-premises scenarios a bit differently, especially for DBMS
high-availability configurations. In the case of the SAP-related documents, the cross-premises scenario boils
down to site-to-site or private ExpressRoute connectivity and an SAP landscape that's distributed between
on-premises and Azure.
Resources
There are other articles available on SAP workload on Azure. Start with SAP workload on Azure: Get started
and then choose your area of interest.
The following SAP Notes are related to SAP on Azure in regard to the area covered in this document.
NOTE NUMBER TITLE
2233094 DB6: SAP applications on Azure using IBM DB2 for Linux,
UNIX, and Windows: Additional information
For information on all the SAP Notes for Linux, see the SAP community wiki.
You need a working knowledge of Microsoft Azure architecture and how Microsoft Azure virtual machines
are deployed and operated. For more information, see Azure documentation.
In general, the Windows, Linux, and DBMS installation and configuration are essentially the same as any
virtual machine or bare metal machine you install on-premises. There are some architecture and system
management implementation decisions that are different when you use Azure IaaS. This document explains
the specific architectural and system management differences to be prepared for when you use Azure IaaS.
NOTE
For DBMS deployments, we recommend Azure premium storage, Ultra disk or Azure NetApp Files based NFS shares
(exclusively for SAP HANA) for any data, transaction log, or redo files. It doesn't matter whether you want to deploy
production or nonproduction systems.
NOTE
To benefit from Azure's single VM SLA, all disks that are attached must be Azure premium storage or Azure Ultra disk
type, which includes the base VHD (Azure premium storage).
NOTE
Hosting main database files, such as data and log files, of SAP databases on storage hardware that's located in co-
located third-party data centers adjacent to Azure data centers isn't supported. Storage provided through software
appliances hosted in Azure VMs is also not supported for this use case. For SAP DBMS workloads, only storage
that's represented as a native Azure service is supported for the data and transaction log files of SAP databases in
general. Different DBMS might support different Azure storage types. For more details, check the article Azure Storage
types for SAP workload.
The placement of the database files and the log and redo files, and the type of Azure Storage you use, is
defined by IOPS, latency, and throughput requirements. Specifically for Azure premium storage, to achieve
enough IOPS you might be forced to use multiple disks or use a larger premium storage disk. If you use
multiple disks, build a software stripe across the disks that contain the data files or the log and redo files. In
such cases, the IOPS and the disk throughput SLAs of the underlying premium storage disks or the
maximum achievable IOPS of standard storage disks are accumulative for the resulting stripe set.
If your IOPS requirement exceeds what a single VHD can provide, balance the number of IOPS that are
needed for the database files across a number of VHDs. The easiest way to distribute the IOPS load across
disks is to build a software stripe over the different disks. Then place a number of data files of the SAP DBMS
on the LUNs carved out of the software stripe. The number of disks in the stripe is driven by IOPS demands,
disk throughput demands, and volume demands.
Windows
We recommend that you use Windows Storage Spaces to create stripe sets across multiple Azure VHDs.
Use at least Windows Server 2012 R2 or Windows Server 2016.
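A minimal Storage Spaces sketch, assuming all poolable data disks should form one simple (striped, non-redundant) volume; the pool and disk names are placeholders:
# Pool all data disks that are eligible for pooling (assumes a single storage subsystem)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName SapDataPool -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
# Create a simple (striped) virtual disk over all disks in the pool
New-VirtualDisk -StoragePoolFriendlyName SapDataPool -FriendlyName SapDataDisk -ResiliencySettingName Simple -UseMaximumSize -ProvisioningType Fixed
# Initialize, partition, and format the disk; the 64 KB allocation unit matches the SQL Server recommendation later in this guide
Get-VirtualDisk -FriendlyName SapDataDisk | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536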
Linux
Only MDADM and Logical Volume Manager (LVM) are supported to build a software RAID on Linux. For
more information, see:
Configure software RAID on Linux using MDADM
Configure LVM on a Linux VM in Azure
For Azure Ultra disk, striping is not necessary since you can define IOPS and disk throughput independent of
the size of the disk.
NOTE
Because Azure Storage keeps three images of the VHDs, it doesn't make sense to configure a redundancy when you
stripe. You only need to configure striping so that the I/Os are distributed over the different VHDs.
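A minimal MDADM sketch, assuming three data disks are attached as /dev/sdc, /dev/sdd, and /dev/sde (device names vary per VM) and /sapdata is a placeholder mount point:
# Create a RAID-0 stripe set; no redundancy is needed because Azure Storage keeps copies of the VHDs
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
# Create a file system on the stripe set and mount it for the database files
sudo mkfs.xfs /dev/md0
sudo mkdir -p /sapdata
sudo mount /dev/md0 /sapdata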
IMPORTANT
Given the advantages of Azure Managed Disks, we highly recommend that you use Azure Managed Disks for your
DBMS deployments and SAP deployments in general.
Linux
Linux Azure VMs automatically mount a drive at /mnt/resource that's a nonpersisted drive backed by
local disks on the Azure compute node. Because it's nonpersisted, any changes made to content in
/mnt/resource are lost when the VM is rebooted. Changes include files that were stored, directories that
were created, and applications that were installed.
NOTE
Azure premium storage, Ultra disk and Azure NetApp Files (exclusively for SAP HANA) are the recommended type of
storage for DBMS VMs and disks that store database and log and redo files. The only available redundancy method
for these storage types is LRS. As a result, you need to configure database methods to enable database data
replication into another Azure region or availability zone. Database methods include SQL Server Always On, Oracle
Data Guard, and HANA System Replication.
NOTE
For DBMS deployments, the use of geo-redundant storage (GRS) isn't recommended for standard storage. GRS
severely affects performance and doesn't honor the write order across different VHDs that are attached to a VM. Not
honoring the write order across different VHDs potentially leads to inconsistent databases on the replication target
side. This situation occurs if database and log and redo files are spread across multiple VHDs, as is generally the case,
on the source VM side.
VM node resiliency
Azure offers several different SLAs for VMs. For more information, see the most recent release of SLA for
Virtual Machines. Because the DBMS layer is critical to availability in an SAP system, you need to understand
availability sets, Availability Zones, and maintenance events. For more information on these concepts, see
Manage the availability of Windows virtual machines in Azure and Manage the availability of Linux virtual
machines in Azure.
The minimum recommendation for production DBMS scenarios with an SAP workload is to:
Deploy two VMs in a separate availability set in the same Azure region.
Run these two VMs in the same Azure virtual network and have NICs attached out of the same subnets.
Use database methods to keep a hot standby with the second VM. Methods can be SQL Server Always
On, Oracle Data Guard, or HANA System Replication.
You also can deploy a third VM in another Azure region and use the same database methods to supply an
asynchronous replica in another Azure region.
For information on how to set up Azure availability sets, see this tutorial.
NOTE
Assigning static IP addresses through Azure means to assign them to individual virtual NICs. Don't assign static IP
addresses within the guest OS to a virtual NIC. Some Azure services like Azure Backup rely on the fact that at least
the primary virtual NIC is set to DHCP and not to static IP addresses. For more information, see Troubleshoot Azure
virtual machine backup. To assign multiple static IP addresses to a VM, assign multiple virtual NICs to a VM.
WARNING
Configuring network virtual appliances in the communication path between the SAP application and the DBMS layer
of a SAP NetWeaver-, Hybris-, or S/4HANA-based SAP system isn't supported. This restriction is for functionality and
performance reasons. The communication path between the SAP application layer and the DBMS layer must be a
direct one. The restriction doesn't include application security group (ASG) and NSG rules if those ASG and NSG rules
allow a direct communication path.
Other scenarios where network virtual appliances aren't supported are in:
Communication paths between Azure VMs that represent Linux Pacemaker cluster nodes and SBD devices as
described in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP
Applications.
Communication paths between Azure VMs and Windows Server Scale-Out File Server (SOFS) set up as described
in Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure.
Network virtual appliances in communication paths can easily double the network latency between two
communication partners. They also can restrict throughput in critical paths between the SAP application layer and the
DBMS layer. In some customer scenarios, network virtual appliances can cause Pacemaker Linux clusters to fail. These
are cases where the Linux Pacemaker cluster nodes communicate with their SBD device through a network virtual
appliance.
IMPORTANT
Another design that's not supported is the segregation of the SAP application layer and the DBMS layer into different
Azure virtual networks that aren't peered with each other. We recommend that you segregate the SAP application
layer and DBMS layer by using subnets within an Azure virtual network instead of by using different Azure virtual
networks.
If you decide not to follow the recommendation and instead segregate the two layers into different virtual networks,
the two virtual networks must be peered.
Be aware that network traffic between two peered Azure virtual networks is subject to transfer costs. Huge data
volumes, possibly many terabytes, are exchanged between the SAP application layer and the DBMS layer. You can
accumulate substantial costs if the SAP application layer and DBMS layer are segregated between two peered Azure
virtual networks.
Use two VMs for your production DBMS deployment within an Azure availability set or between two Azure
Availability Zones. Also use separate routing for the SAP application layer and the management and
operations traffic to the two DBMS VMs.
NOTE
There are differences in behavior of the basic and standard SKU related to the access of public IP addresses. The way
how to work around the restrictions of the Standard SKU to access public IP addresses is described in the document
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios
NOTE
Not all VM types support Accelerated Networking. The previous article lists the VM types that support Accelerated
Networking.
Windows
To learn how to deploy VMs with Accelerated Networking for Windows, see Create a Windows virtual
machine with Accelerated Networking.
Linux
For more information on Linux distribution, see Create a Linux virtual machine with Accelerated
Networking.
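A minimal sketch of enabling the feature on an existing NIC with the Azure CLI; the VM must be deallocated first, and the resource names are placeholders:
az network nic update --resource-group <rg> --name <nic-name> --accelerated-networking true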
NOTE
In the case of SUSE, Red Hat, and Oracle Linux, Accelerated Networking is supported with recent releases. Older
releases like SLES 12 SP2 or RHEL 7.2 don't support Azure Accelerated Networking.
Next steps
For more information on a particular DBMS, see:
SQL Server Azure Virtual Machines DBMS deployment for SAP workload
Oracle Azure Virtual Machines DBMS deployment for SAP workload
IBM DB2 Azure Virtual Machines DBMS deployment for SAP workload
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache, and Content Server deployment on Azure
SAP HANA on Azure operations guide
SAP HANA high availability for Azure virtual machines
Backup guide for SAP HANA on Azure virtual machines
SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver
This document covers several different areas to consider when deploying SQL Server for SAP workload in Azure
IaaS. As a precondition to this document, you should have read the document Considerations for Azure Virtual
Machines DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.
IMPORTANT
The scope of this document is the Windows version of SQL Server. SAP does not support the Linux version of SQL Server
with any of the SAP software. The document does not discuss Microsoft Azure SQL Database, which is a Platform as a
Service offer of the Microsoft Azure Platform. The discussion in this paper is about running the SQL Server product as it is
known from on-premises deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure.
Database capabilities and functionality between these two offers are different and should not be mixed up with each other.
See also: https://azure.microsoft.com/services/sql-database/
In general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The
latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have
changes that optimize operations in an Azure IaaS infrastructure.
It is recommended to review the article What is SQL Server on Azure Virtual Machines (Windows)
(https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)
before continuing.
In the following sections, parts of the documentation under the link above are aggregated and
mentioned. Specifics around SAP are mentioned as well, and some concepts are described in more detail. However,
it is highly recommended to work through the documentation above first before reading the SQL Server-specific
documentation.
There is some SQL Server in IaaS specific information you should know before continuing:
SQL Version Support : For SAP customers, SQL Server 2008 R2 and higher is supported on Microsoft Azure
Virtual Machine. Earlier editions are not supported. Review this general Support Statement for more details. In
general, SQL Server 2008 is supported by Microsoft as well. However due to significant functionality for SAP,
which was introduced with SQL Server 2008 R2, SQL Server 2008 R2 is the minimum release for SAP. In
general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The
latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have
changes that optimize operations in an Azure IaaS infrastructure. Therefore, the paper is restricted to SQL
Server 2016 and SQL Server 2017.
SQL Performance : Microsoft Azure hosted Virtual Machines perform well in comparison to other public cloud
virtualization offerings, but individual results may vary. Check out the article Performance best practices for
SQL Server in Azure Virtual Machines.
Using Images from Azure Marketplace : The fastest way to deploy a new Microsoft Azure VM is to use an
image from the Azure Marketplace. There are images in the Azure Marketplace, which contain the most recent
SQL Server releases. The images where SQL Server already is installed can't be immediately used for SAP
NetWeaver applications. The reason is the default SQL Server collation is installed within those images and not
the collation required by SAP NetWeaver systems. In order to use such images, check the steps documented in
chapter Using a SQL Server image out of the Microsoft Azure Marketplace.
NOTE
In case you place tempdb data files and log file into a folder on the D:\ drive that you created, you need to make sure that the
folder exists after a VM reboot. Since the D:\ drive is freshly initialized after a VM reboot, all file and directory structures
are wiped out. A possibility to recreate any directory structures on the D:\ drive before the start of the SQL Server service is
documented in this article.
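A minimal sketch of such a startup script, assuming D:\SQLTEMP is a hypothetical tempdb folder and the SQL Server service is started by the script once the folder exists:
# recreate-tempdb-dir.ps1 - run at VM startup, for example through a scheduled task
New-Item -ItemType Directory -Path "D:\SQLTEMP" -Force | Out-Null
# Default instance service name; adjust for named instances
Start-Service -Name "MSSQLSERVER"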
A VM configuration, which runs SQL Server with an SAP database and where tempdb data and tempdb logfile are
placed on the D:\ drive would look like:
The diagram above displays a simple case. As alluded to in the article Considerations for Azure Virtual Machines
DBMS deployment for SAP workload, the Azure storage type, number, and size of disks depend on different
factors. But in general we recommend:
Using one large volume, which contains the SQL Server data files. The reason behind this configuration is that in
real life there are numerous SAP databases with differently sized database files with different I/O workloads.
Use the D:\ drive for tempdb as long as performance is good enough. If the overall workload is limited in
performance by tempdb being located on the D:\ drive, you might need to consider moving tempdb to separate
Azure premium storage or Ultra disks as recommended in this article.
Special for M-Series VMs
For Azure M-Series VM, the latency writing into the transaction log can be reduced by factors, compared to Azure
Premium Storage performance, when using Azure Write Accelerator. Hence, you should deploy Azure Write
Accelerator for the VHD(s) that form the volume for the SQL Server transaction log. Details can be read in the
document Write Accelerator.
Formatting the disks
For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There is no
need to format the D:\ drive. This drive comes pre-formatted.
In order to make sure that the restore or creation of databases is not initializing the data files by zeroing the
content of the files, you should make sure that the user context the SQL Server service is running in has a certain
permission. Usually users in the Windows Administrator group have these permissions. If the SQL Server service
is run in the user context of non-Windows Administrator user, you need to assign that user the User Right
Perform volume maintenance tasks . See the details in this Microsoft Knowledge Base Article:
https://support.microsoft.com/kb/2574695
Impact of database compression
In configurations where I/O bandwidth can become a limiting factor, every measure, which reduces IOPS might
help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done, applying SQL
Server PAGE compression is recommended by both SAP and Microsoft before uploading an existing SAP database
to Azure.
The recommendation to perform Database Compression before uploading to Azure is given for the following reasons:
The amount of data to be uploaded is lower.
The duration of the compression execution is shorter assuming that one can use stronger hardware with more
CPUs or higher I/O bandwidth or less I/O latency on-premises.
Smaller database sizes might lead to less costs for disk allocation
Database compression works as well in an Azure Virtual Machines as it does on-premises. For more details on
how to compress existing SAP NetWeaver SQL Server databases, check the article Improved SAP compression tool
MSSCOMPRESS.
SQL Server 2014 and more recent - Storing Database Files directly on
Azure Blob Storage
SQL Server 2014 and later releases open the possibility to store database files directly on Azure Blob Store without
the 'wrapper' of a VHD around them. Especially with using Standard Azure Storage or smaller VM types this type
of deployment enables scenarios where you can overcome the limits of IOPS that would be enforced by a limited
number of disks that can be mounted to some smaller VM types. This way of deployment works for user databases
however not for system databases of SQL Server. It also works for data and log files of SQL Server. If you'd like to
deploy an SAP SQL Server database this way instead of 'wrapping' it into VHDs, keep in mind:
The Storage Account used needs to be in the same Azure Region as the one that is used to deploy the VM SQL
Server is running in.
Considerations listed earlier regarding the distribution of VHDs over different Azure Storage Accounts apply for
this method of deployment as well. This means the I/O operations count against the limits of the Azure Storage
Account.
Instead of accounting against the VM's storage I/O quota, the traffic against storage blobs representing the SQL
Server data and log files, will be accounted into the VM's network bandwidth of the specific VM type. For
network and storage bandwidth of a particular VM type, consult the article Sizes for Windows virtual machines
in Azure.
As a result of pushing file I/O through the network quota, you mostly leave the storage quota unused, and with
that, use the overall bandwidth of the VM only partially.
The IOPS and I/O throughput Performance targets that Azure Premium Storage has for the different disk sizes
do not apply anymore. Even if the blobs you created are located on Azure Premium Storage. The targets are
documented in the article High-performance Premium Storage and managed disks for VMs. As a result of placing
SQL Server data files and log files directly on blobs that are stored on Azure Premium Storage, the performance
characteristics can be different compared to VHDs on Azure Premium Storage.
Host based caching as available for Azure Premium Storage disks is not available when placing SQL Server data
files directly on Azure blobs.
On M-Series VMs, Azure Write Accelerator can't be used to support sub-millisecond writes against the SQL
Server transaction log file.
Details of this functionality can be found in the article SQL Server data files in Microsoft Azure.
The recommendation for production systems is to avoid this configuration and rather choose the placement of SQL
Server data and log files in Azure Premium Storage VHDs instead of directly on Azure blobs.
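A minimal sketch of how to verify the sort order from a query window, assuming sp_helpsort is used to show the server default collation:
exec sp_helpsort
The expected server default collation description is: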
Latin1-General, binary code point comparison sort for Unicode Data, SQL Server Sort Order 40 on Code Page 850
for non-Unicode Data
If the result is different, STOP deploying SAP and investigate why the setup command did not work as expected.
Deployment of SAP NetWeaver applications onto SQL Server instance with different SQL Server codepages than
the one mentioned above is NOT supported.
NOTE
If you are configuring the Azure load balancer for the virtual IP address of the Availability Group listener, make sure that
DirectServerReturn is configured. Configuring this option will reduce the network round trip latency between the SAP
application layer and the DBMS layer.
SQL Server Always On is the most commonly used high availability and disaster recovery functionality in
Azure for SAP workload deployments. Most customers use Always On for high availability within a single Azure
Region. If the deployment is restricted to two nodes only, you have two choices for connectivity:
Using the Availability Group Listener. With the Availability Group Listener, you are required to deploy an Azure
load balancer. This way is the default method of deployment. SAP applications would be configured to connect
against the Availability Group listener and not against a single node
Using the connectivity parameters of SQL Server Database Mirroring. In this case, you need to configure the
connectivity of the SAP applications in a way where both node names are named. Exact details of such an SAP
side configuration is documented in SAP Note #965908. By using this option, you would have no need to
configure an Availability Group listener. And with that no Azure load balancer for the SQL Server high
availability. As a result, the network latency between the SAP application layer and the DBMS layer is lower
since the incoming traffic to the SQL Server instance is not routed through the Azure load balancer. But recall,
this option only works if you restrict your Availability Group to span two instances.
Quite a few customers are leveraging the SQL Server Always On functionality for additional disaster recovery
functionality between Azure regions. Several customers also use the ability to perform backups from a secondary
replica.
IMPORTANT
When you use SQL Server TDE, especially with Azure Key Vault, it is recommended to use the latest patches of SQL Server 2014, SQL
Server 2016, and SQL Server 2017. The reason is that, based on customer feedback, optimizations and fixes were applied to the
code. As an example, check KBA #4058175.
Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
Azure Virtual Machines Oracle DBMS deployment
for SAP workload
This document covers several different areas to consider when you're deploying Oracle Database for SAP workload
in Azure IaaS. Before you read this document, we recommend you read Considerations for Azure Virtual Machines
DBMS deployment for SAP workload. We also recommend that you read other guides in the SAP workload on
Azure documentation.
You can find information about Oracle versions and corresponding OS versions that are supported for running
SAP on Oracle on Azure in SAP Note 2039619.
General information about running SAP Business Suite on Oracle can be found at SAP on Oracle. Oracle software
is supported by Oracle to run on Microsoft Azure. For more information about general support for Windows
Hyper-V and Azure, check the Oracle and Microsoft Azure FAQ.
The exact configurations and functionality that are supported by Oracle and SAP on Azure are documented in SAP
Note #2039619.
Windows and Oracle Linux are the only operating systems that are supported by Oracle and SAP on Azure. The
widely used SLES and RHEL Linux distributions aren't supported for deploying Oracle components in Azure. Oracle
components include the Oracle Database client, which is used by SAP applications to connect against the Oracle
DBMS.
Exceptions, according to SAP Note #2039619, are SAP components that don't use the Oracle Database client. Such
SAP components are SAP's stand-alone enqueue, message server, Enqueue replication services, WebDispatcher,
and SAP Gateway.
Even if you're running your Oracle DBMS and SAP application instances on Oracle Linux, you can run your SAP
Central Services on SLES or RHEL and protect it with a Pacemaker-based cluster. Pacemaker as a high-availability
framework isn't supported on Oracle Linux.
COMPONENT DISK CACHING STORAGE POOL
Disks selection for hosting online redo logs should be driven by IOPS requirements. It's possible to store all
sapdata1...n (tablespaces) on one single mounted disk as long as the size, IOPS, and throughput satisfy the
requirements.
The performance configuration is as follows:
COMPONENT DISK CACHING STORAGE POOL
*(n+1): hosting SYSTEM, TEMP, and UNDO tablespaces. The I/O pattern of the System and Undo tablespaces is
different from other tablespaces hosting application data. No caching is the best option for performance of the
System and Undo tablespaces.
*oraarch: storage pool isn't necessary from a performance point of view. It can be used to get more space.
If more IOPS are required in case of Azure premium storage, we recommend using Windows Storage Pools (only
available in Windows Server 2012 and later) to create one large logical device over multiple mounted disks. This
approach simplifies the administration overhead for managing the disk space, and helps you avoid the effort of
manually distributing files across multiple mounted disks.
Write Accelerator
For Azure M-Series VMs, the latency writing into the online redo logs can be reduced by factors when compared to
Azure premium storage. Enable Azure Write Accelerator for the disks (VHDs) based on Azure Premium Storage
that are used for online redo log files. For more information, see Write Accelerator. Or use Azure Ultra disk for the
online redo log volume.
Backup/restore
For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same way as they are on
standard Windows Server operating systems. Oracle Recovery Manager (RMAN) is also supported for backups to
disk and restores from disk.
You can also use Azure Backup to run an application-consistent VM backup. The article Plan your VM backup
infrastructure in Azure explains how Azure Backup uses the Windows VSS functionality for executing application-
consistent backups. The Oracle DBMS releases that are supported on Azure by SAP can leverage the VSS
functionality for backups. For more information, see the Oracle documentation Basic concepts of database backup
and recovery with VSS.
High availability
Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover
in Data Guard, you need to use Fast-Start Failover (FSFO). The Data Guard observer triggers the failover. If you don't
use FSFO, you can only use a manual failover configuration.
For more information about disaster recovery for Oracle databases in Azure, see Disaster recovery for an Oracle
Database 12c database in an Azure environment.
Accelerated networking
For Oracle deployments on Windows, we strongly recommend accelerated networking as described in Azure
accelerated networking. Also consider the recommendations that are made in Considerations for Azure Virtual
Machines DBMS deployment for SAP workload.
Other
Considerations for Azure Virtual Machines DBMS deployment for SAP workload describes other important
concepts related to deployments of VMs with Oracle Database, including Azure availability sets and SAP
monitoring.
Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS deployment
for SAP workload
With Microsoft Azure, you can migrate your existing SAP application running on IBM Db2 for Linux, UNIX, and
Windows (LUW) to Azure virtual machines. With SAP on IBM Db2 for LUW, administrators and developers can still
use the same development and administration tools, which are available on-premises. General information about
running SAP Business Suite on IBM Db2 for LUW can be found in the SAP Community Network (SCN) at
https://www.sap.com/community/topic/db2-for-linux-unix-and-windows.html.
For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note 2233094.
There are various articles on SAP workload on Azure. It is recommended to start with SAP workload on Azure
- Get Started and then pick the area of interest.
The following SAP Notes are related to SAP on Azure regarding the area covered in this document:
NOTE NUMBER TITLE
2233094 DB6: SAP Applications on Azure Using IBM DB2 for Linux,
UNIX, and Windows - Additional Information
As a pre-read to this document, you should have read the document Considerations for Azure Virtual Machines
DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure documentation.
VM NAME / SIZE | DB2 MOUNT POINT | AZURE PREMIUM DISK | NR OF DISKS | IOPS | THROUGHPUT [MB/s] | SIZE [GB] | BURST IOPS | BURST THROUGHPUT [MB/s] | STRIPE SIZE | CACHING
vCPU: 4 | /db2/<SID>/sapdata | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 256 KB | ReadOnly
Small SAP system: database size 200 - 750 GB: small Business Suite
vCPU: 16 | /db2/<SID>/sapdata | P15 | 4 | 4,400 | 500 | 1,024 | 14,000 | 680 | 256 KB | ReadOnly
Medium SAP system: database size 500 - 1000 GB: small Business Suite
vCPU: 32 | /db2/<SID>/sapdata | P30 | 2 | 10,000 | 400 | 2,048 | 10,000 | 400 | 256 KB | ReadOnly
Large SAP system: database size 750 - 2000 GB: Business Suite
vCPU: 64 | /db2/<SID>/sapdata | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 256 KB | ReadOnly
Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system
vCPU: 128 | /db2/<SID>/sapdata | P40 | 4 | 30,000 | 1,000 | 8,192 | 30,000 | 1,000 | 256 KB | ReadOnly
Backup/Restore
The backup/restore functionality for IBM Db2 for LUW is supported in the same way as on standard Windows
Server Operating Systems and Hyper-V.
Make sure that you have a valid database backup strategy in place.
As in bare-metal deployments, backup/restore performance depends on how many volumes can be read in
parallel and what the throughput of those volumes might be. In addition, the CPU consumption used by backup
compression may play a significant role on VMs with up to eight CPU threads. Therefore, one can assume:
The fewer the number of disks used to store the database devices, the smaller the overall throughput in reading
The smaller the number of CPU threads in the VM, the more severe the impact of backup compression
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the throughput
To increase the number of targets to write to, two options can be used/combined depending on your needs:
Striping the backup target volume over multiple disks in order to improve the IOPS throughput on that striped
volume
Using more than one target directory to write the backup to
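A minimal sketch of the second option, writing one compressed backup to two target directories, assuming a database named PTR and placeholder backup paths:
db2 backup db PTR to /backup1, /backup2 compress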
NOTE
Db2 on Windows does not support the Windows VSS technology. As a result, the application-consistent VM backup of
the Azure Backup service can't be leveraged for VMs that the Db2 DBMS is deployed in.
Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
High availability of IBM Db2 LUW on Azure VMs on
SUSE Linux Enterprise Server with Pacemaker
IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery (HADR) configuration
consists of one node that runs a primary database instance and at least one node that runs a secondary database
instance. Changes to the primary database instance are replicated to a secondary database instance
synchronously or asynchronously, depending on your configuration.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster
framework, and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To
help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses
on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note 1928533.
Before you begin an installation, see the following SAP notes and documentation:
SAP NOTE DESCRIPTION
2233094 DB6: SAP applications on Azure that use IBM Db2 for Linux,
UNIX, and Windows - additional information
SAP Community Wiki: Has all of the required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux guide
Azure Virtual Machines database management system(DBMS) deployment for SAP on Linux guide
SUSE Linux Enterprise Server for SAP Applications 12 SP4 best practices guides
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are
deployed in an Azure availability set or across Azure Availability Zones.
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have
their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has
the role of the primary instance. All clients are connected to this primary instance. All changes in database
transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally,
the records are transferred via TCP/IP to the database instance on the second database server, the standby server,
or standby instance. The standby instance updates the local database by rolling forward the transferred
transaction log records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no automatic takeover or failover facilities.
A takeover or transfer to the standby server must be initiated manually by a database administrator. To achieve an
automatic takeover and failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker monitors
the two database server instances. When the primary database server instance crashes, Pacemaker initiates an
automatic HADR takeover by the standby server. Pacemaker also ensures that the virtual IP address is assigned to
the new primary server.
To have SAP application servers connect to the primary database, you need a virtual host name and a virtual IP
address. In the event of a failover, the SAP application servers will connect to the new primary database instance. In an
Azure environment, an Azure load balancer is required to use a virtual IP address in the way that's required for
HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system
setup, the following image presents an overview of a highly available setup of an SAP system based on IBM Db2
database. This article covers only IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
High-level overview of the required steps
To deploy an IBM Db2 configuration, you need to follow these steps:
Plan your environment.
Deploy the VMs.
Update SUSE Linux and configure file systems.
Install and configure Pacemaker.
Install highly available NFS.
Install ASCS/ERS on a separate cluster.
Install IBM Db2 database with Distributed/High Availability option (SWPM).
Install and create a secondary database node and instance, and configure HADR.
Confirm that HADR is working.
Apply the Pacemaker configuration to control IBM Db2.
Configure Azure Load Balancer.
Install primary and dialog application servers.
Check and adapt the configuration of SAP application servers.
Perform failover and takeover tests.
Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the foundation for deploying
a configuration of Db2 with HADR in Azure. Key elements that need to be part of planning for IBM Db2 LUW
(database part of SAP environment) are listed in the following table:
TOPIC SHORT DESCRIPTION
Define Azure resource groups Resource groups where you deploy VM, VNet, Azure Load
Balancer, and other resources. Can be existing or new.
Virtual network / Subnet definition Where VMs for IBM Db2 and Azure Load Balancer are being
deployed. Can be existing or newly created.
Virtual machines hosting IBM Db2 LUW VM size, storage, networking, IP address.
Virtual host name and virtual IP for IBM Db2 database The virtual IP or host name that's used for connection of SAP
application servers. db-virt-hostname , db-virt-ip .
Azure Load Balancer Usage of Basic or Standard (recommended), probe port for
Db2 database (our recommendation 62500) probe-port .
Name resolution How name resolution works in the environment. DNS service
is highly recommended. Local hosts file can be used.
For more information about Linux Pacemaker in Azure, see Set up Pacemaker on SUSE Linux Enterprise Server in
Azure.
IMPORTANT
Write down the "Database Communication port" that's set during installation. It must be the same port number for both
database instances.
To set up the Standby database server by using the SAP homogeneous system copy procedure, execute these
steps:
1. Select the System copy option > Target systems > Distributed > Database instance .
2. As a copy method, select Homogeneous System so that you can use backup to restore a backup on the
standby server instance.
3. When you reach the exit step to restore the database for homogeneous system copy, exit the installer.
Restore the database from a backup of the primary host. All subsequent installation phases have already
been executed on the primary database server.
4. Set up HADR for IBM Db2.
NOTE
For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through
SAP Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
Do not select IBM Db2 pureScale .
Do not select Install IBM Tivoli System Automation for Multiplatforms .
Do not select Generate cluster configuration files .
When you use an SBD device for Linux Pacemaker, set the following Db2 HADR parameters:
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 300
HADR timeout value (HADR_TIMEOUT) = 60
When you use an Azure Pacemaker fencing agent, set the following parameters:
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 900
HADR timeout value (HADR_TIMEOUT) = 60
We recommend the preceding parameters based on initial failover/takeover testing. It is mandatory that you test
for proper functionality of failover and takeover with these parameter settings. Because individual configurations
can vary, the parameters might require adjustment.
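A sketch of applying these settings with the standard db2 update db cfg command, assuming the database SID PTR that's used for demonstration later in this article and the values for the Azure fencing agent:
su - db2ptr
db2 update db cfg for PTR using HADR_PEER_WINDOW 900 HADR_TIMEOUT 60
# With an SBD device, use HADR_PEER_WINDOW 300 instead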
IMPORTANT
Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up
and running before you can start the primary database instance.
For demonstration purposes and the procedures described in this article, the database SID is PTR .
IBM Db2 HADR check
After you've configured HADR and the status is PEER and CONNECTED on the primary and standby nodes,
perform the following check:
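The check command itself isn't shown in this extract; the output that follows is in the format produced by the standard db2pd tool, executed as the instance user (a sketch, assuming instance db2ptr and database PTR):
su - db2ptr
db2pd -hadr -db PTR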
#Primary output:
# Database Member 0 -- Database PTR -- Active -- Up 1 days 01:51:38 -- Date 2019-02-06-15.35.28.505451
#
# HADR_ROLE = PRIMARY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 1
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.170561 (1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6137
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 13
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000025
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223713
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 374400
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_RECV_REPLAY_GAP(bytes) = 0
# PRIMARY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:40:25.000000 (1549467625)
# READS_ON_STANDBY_ENABLED = N
#Secondary output:
# Database Member 0 -- Database PTR -- Standby -- Up 1 days 01:46:43 -- Date 2019-02-06-15.38.25.644168
#
# HADR_ROLE = STANDBY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 0
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.205067 (1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6186
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 5
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000023
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223725
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 372480
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_RECV_REPLAY_GAP(bytes) = 155
# PRIMARY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:43:19.000000 (1549467799)
# READS_ON_STANDBY_ENABLED = N
IMPORTANT
Recent testing revealed situations where netcat stops responding to requests because of a backlog and its limitation of
handling only one connection. The netcat resource then stops listening to the Azure Load Balancer requests, and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we previously recommended replacing netcat with socat. Currently, we recommend using
the azure-lb resource agent, which is part of the resource-agents package, with the following package version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-Balancer
Detection Hardening, there is no requirement to switch immediately to the azure-lb resource agent.
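To verify which resource-agents package version is installed on your SLES nodes, you can query the package manager, for example:
rpm -q resource-agents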
[1] Make sure that the cluster status is OK and that all of the resources are started. It's not important which node
the resources are running on.
# 2 nodes configured
# 5 resources configured
NOTE
The Standard Load Balancer SKU has restrictions on accessing public IP addresses from the nodes underneath the Load
Balancer. The article Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-
availability scenarios describes how to enable those nodes to access public IP addresses.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
/sapmnt/<SID>/global/db6/db2cli.ini
Hostname=db-virt-hostname
2 nodes configured
5 resources configured
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Test takeover of IBM Db2
IMPORTANT
Before you start the test, make sure that:
Pacemaker doesn't have any failed actions (crm status).
There are no location constraints (leftovers of a migration test).
The IBM Db2 HADR synchronization is working. Check with user db2<sid>.
Migrate the node that's running the primary Db2 database by executing the following command:
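The migrate command itself isn't included in this extract. A sketch with crmsh, assuming the Db2 master/slave resource is named msl_Db2_db2ptr_PTR (an assumption; check crm status for the actual resource name) and azibmdb02 is the target node:
crm resource migrate msl_Db2_db2ptr_PTR azibmdb02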
After the migration is done, the crm status output looks like:
2 nodes configured
5 resources configured
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Resource migration with "crm resource migrate" creates location constraints. Location constraints should be
deleted. If location constraints are not deleted, the resource cannot fail back, or you might experience unwanted
takeovers.
Migrate the resource back to azibmdb01 and clear the location constraints:
crm resource migrate <res_name> <host>: Creates location constraints and can cause issues with takeover.
crm resource clear <res_name>: Clears location constraints.
crm resource cleanup <res_name>: Clears all errors of the resource.
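For example, using the assumed resource name msl_Db2_db2ptr_PTR from the earlier sketch:
crm resource migrate msl_Db2_db2ptr_PTR azibmdb01
crm resource clear msl_Db2_db2ptr_PTR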
Test the fencing agent
In this case, we test SBD fencing, which we recommend that you do when you use SUSE Linux.
Cluster node azibmdb01 should be rebooted. The IBM Db2 primary HADR role is going to be moved to
azibmdb02. When azibmdb01 is back online, the Db2 instance will move into the role of the secondary
database instance.
If the Pacemaker service doesn't start automatically on the rebooted former primary, be sure to start it manually
with:
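The start command isn't shown in this extract; on systemd-based SLES releases, it would typically be:
sudo systemctl start pacemaker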
Cluster status on azibmdb02:
2 nodes configured
5 resources configured
Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]
After the failover, you can start the service again on azibmdb01.
Kill the Db2 process on the node that runs the HADR primary database
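A sketch of this test, following the same pattern that's used later for the secondary instance (the process ID is an example; determine it with ps first):
azibmdb01:~ # ps -ef | grep db2sysc
azibmdb01:~ # kill -9 <pid-of-db2sysc>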
The Db2 instance is going to fail, and Pacemaker will report the following status:
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's
running the secondary database instance and an error is reported.
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
Kill the Db2 process on the node that runs the secondary database instance
azibmdb02:~ # kill -9
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
The Db2 instance gets restarted in the secondary role that it previously had.
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
Stop DB via db2stop force on the node that runs the HADR primary database instance
2 nodes configured
5 resources configured
azibmdb01:~ # su - db2ptr
azibmdb01:db2ptr> db2stop force
Failure detected
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=201, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms
The Db2 HADR secondary database instance got promoted into the primary role.
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_start_0 on azibmdb01 'unknown error' (1): call=205, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms
Crash VM with restart on the node that runs the HADR primary database instance
Pacemaker will promote the secondary instance to the primary instance role. The old primary instance will move
into the secondary role after the VM and all services are fully restored following the VM reboot:
2 nodes configured
5 resources configured
Crash the VM that runs the HADR primary database instance with "halt"
In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.
2 nodes configured
5 resources configured
The next step is to check for a split-brain situation. After the surviving node has determined that the node that last
ran the primary database instance is down, a failover of resources is executed.
2 nodes configured
5 resources configured
Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]
If the node was halted, the failed node must be restarted via Azure management tools (the Azure portal,
PowerShell, or the Azure CLI). After the failed node is back online, it starts the Db2 instance into the
secondary role.
2 nodes configured
5 resources configured
Next steps
High-availability architecture and scenarios for SAP NetWeaver
Set up Pacemaker on SUSE Linux Enterprise Server in Azure
High availability of IBM Db2 LUW on Azure VMs on
Red Hat Enterprise Linux Server
IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery (HADR) configuration
consists of one node that runs a primary database instance and at least one node that runs a secondary database
instance. Changes to the primary database instance are replicated to a secondary database instance
synchronously or asynchronously, depending on your configuration.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework,
and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To
help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses
on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note 1928533.
Before you begin an installation, see the following SAP notes and documentation:
SAP NOTE | DESCRIPTION
2233094 | DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows - additional information
SAP Community Wiki: Has all of the required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux guide
Azure Virtual Machines database management system (DBMS) deployment for SAP on Linux guide
Overview of the High Availability Add-On for Red Hat Enterprise Linux 7
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
Support Policy for RHEL High Availability Clusters - Management of IBM Db2 for Linux, Unix, and Windows in a Cluster
Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are
deployed in an Azure availability set or across Azure Availability Zones.
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have
their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has
the role of the primary instance. All clients are connected to the primary instance. All changes in database transactions
are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are
transferred via TCP/IP to the database instance on the second database server, the standby server, or standby
instance. The standby instance updates the local database by rolling forward the transferred transaction log
records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no automatic takeover or failover facilities.
A takeover or transfer to the standby server must be initiated manually by a database administrator. To achieve an
automatic takeover and failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker monitors
the two database server instances. When the primary database server instance crashes, Pacemaker initiates an
automatic HADR takeover by the standby server. Pacemaker also ensures that the virtual IP address is assigned to
the new primary server.
To have SAP application servers connect to the primary database, you need a virtual host name and a virtual IP
address. In the event of a failover, the SAP application servers will connect to the new primary database instance. In an
Azure environment, an Azure load balancer is required to use a virtual IP address in the way that's required for
HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system
setup, the following image presents an overview of a highly available setup of an SAP system based on IBM Db2
database. This article covers only IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the foundation for deploying
a configuration of Db2 with HADR in Azure. Key elements that need to be part of planning for IBM Db2 LUW
(the database part of the SAP environment) are listed in the following table:
TOPIC | SHORT DESCRIPTION
Define Azure resource groups | Resource groups where you deploy the VMs, virtual network, Azure Load Balancer, and other resources. Can be existing or new.
Virtual network / subnet definition | Where the VMs for IBM Db2 and Azure Load Balancer are deployed. Can be existing or newly created.
Virtual machines hosting IBM Db2 LUW | VM size, storage, networking, IP address.
Virtual host name and virtual IP for IBM Db2 database | The virtual IP or host name that's used for the connection of SAP application servers: db-virt-hostname, db-virt-ip.
Azure Load Balancer | Usage of Basic or Standard (recommended); probe port for the Db2 database (recommendation: 62500): probe-port.
Name resolution | How name resolution works in the environment. A DNS service is highly recommended; a local hosts file can be used.
For more information about Linux Pacemaker in Azure, see Setting up Pacemaker on Red Hat Enterprise Linux in
Azure.
IMPORTANT
Write down the "Database Communication port" that's set during installation. It must be the same port number for both
database instances.
NOTE
Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up
and running before you can start the primary database instance.
NOTE
For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through SAP
Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
Do not select IBM Db2 pureScale.
Do not select Install IBM Tivoli System Automation for Multiplatforms.
Do not select Generate cluster configuration files.
To set up the Standby database server by using the SAP homogeneous system copy procedure, execute these
steps:
1. Select the System copy option > Target systems > Distributed > Database instance.
2. As a copy method, select Homogeneous System so that you can use backup to restore a backup on the
standby server instance.
3. When you reach the exit step to restore the database for homogeneous system copy, exit the installer. Restore
the database from a backup of the primary host. All subsequent installation phases have already been executed
on the primary database server.
Red Hat firewall rules for DB2 HADR
Add firewall rules to allow traffic to DB2 and between the DB2 nodes for HADR to work (see the example after this list):
Database communication port. If using partitions, add those ports too.
HADR port (value of DB2 parameter HADR_LOCAL_SVC)
Azure probe port
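A hedged example with firewall-cmd; the first two port numbers are placeholders and must be replaced with the values from your installation (only the probe port 62500 matches the recommendation given earlier in this article):
# Database communication port (placeholder value)
sudo firewall-cmd --add-port=5912/tcp --permanent
# HADR port, value of DB2 parameter HADR_LOCAL_SVC (placeholder value)
sudo firewall-cmd --add-port=51012/tcp --permanent
# Azure probe port
sudo firewall-cmd --add-port=62500/tcp --permanent
sudo firewall-cmd --reload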
#Primary output:
Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date 2019-06-25-10.55.25.349375
HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.076494 (1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 5
HEARTBEAT_EXPECTED = 52
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 5
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 369280
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 132242668
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 300
PEER_WINDOW_END = 06/25/2019 11:12:03.000000 (1561461123)
READS_ON_STANDBY_ENABLED = N
#Secondary output:
Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date 2019-06-25-10.56.19.820474
HADR_ROLE = STANDBY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 0
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.078116 (1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 0
HEARTBEAT_EXPECTED = 10
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 1
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 367360
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 0
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 1000
PEER_WINDOW_END = 06/25/2019 11:12:59.000000 (1561461179)
READS_ON_STANDBY_ENABLED = N
Pacemaker configuration
[1] IBM Db2 HADR-specific Pacemaker configuration:
# Replace the placeholder values with your instance name (db2id2), database SID (ID2), and the
# names of your virtual IP address/Azure Load Balancer resources.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' master meta notify=true resource-stickiness=5000
# Configure resource stickiness and correct cluster notifications for the master resource
sudo pcs resource update Db2_HADR_ID2-master meta notify=true resource-stickiness=5000
# Create a colocation constraint - keep the Db2 HADR master and the group on the same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-master
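The resources that make up the group g_ipnc_db2id2_ID2 referenced by the colocation constraint aren't shown in this extract. A sketch of creating them, assuming the load balancer frontend IP 10.100.0.40 and the probe port 62500 recommended earlier (the resource names and the IP address are assumptions):
# Virtual IP address of the Azure Load Balancer frontend (example IP)
sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40' --group g_ipnc_db2id2_ID2
# Listener for the Azure Load Balancer health probe
sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500 --group g_ipnc_db2id2_ID2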
[1] Make sure that the cluster status is OK and that all of the resources are started. It's not important which node
the resources are running on.
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
IMPORTANT
You must manage the Pacemaker-clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as
db2stop, Pacemaker detects the action as a failure of the resource. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker suspends monitoring resources, and you can then use normal db2
administration commands.
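For example, to put the whole cluster into maintenance mode and take it out again:
sudo pcs property set maintenance-mode=true
# ...perform db2 administration tasks...
sudo pcs property set maintenance-mode=false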
NOTE
The Standard Load Balancer SKU has restrictions on accessing public IP addresses from the nodes underneath the Load
Balancer. The article Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-
availability scenarios describes how to enable those nodes to access public IP addresses.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
/sapmnt/<SID>/global/db6/db2cli.ini
Hostname=db-virt-hostname
sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
5. Select Add .
6. To save your changes, select the disk icon at the upper left.
7. Close the configuration tool.
8. Restart the Java instance.
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Test takeover of IBM Db2
IMPORTANT
Before you start the test, make sure that:
Pacemaker doesn't have any failed actions (pcs status).
There are no location constraints (leftovers of a migration test).
The IBM Db2 HADR synchronization is working. Check with user db2<sid>.
Migrate the node that's running the primary Db2 database by executing the following command:
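The move command isn't shown in this extract; based on the resource name created in the Pacemaker configuration above, it would typically be:
[sapadmin@az-idb01 ~]$ sudo pcs resource move Db2_HADR_ID2-master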
After the migration is done, the pcs status output looks like:
2 nodes configured
5 resources configured
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Resource migration with "pcs resource move" creates location constraints. Location constraints in this case
prevent running the IBM Db2 instance on az-idb01. If location constraints are not deleted, the resource cannot fail
back.
Remove the location constraint, and the standby node will be started on az-idb01.
2 nodes configured
5 resources configured
Migrate the resource back to az-idb01 and clear the location constraints:
pcs resource move <res_name>: Creates location constraints and can cause issues with takeover.
pcs resource clear <res_name>: Clears location constraints.
pcs resource cleanup <res_name>: Clears all errors of the resource.
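For example, to clear the constraints and any recorded errors for the master resource created earlier:
sudo pcs resource clear Db2_HADR_ID2-master
sudo pcs resource cleanup Db2_HADR_ID2-master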
Test a manual takeover
You can test a manual takeover by stopping the Pacemaker service on the az-idb01 node:
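A sketch, assuming a systemd-based RHEL release:
[sapadmin@az-idb01 ~]$ sudo systemctl stop pacemaker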
Cluster status on az-idb02:
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
After the failover, you can start the service again on az-idb01.
Kill the Db2 process on the node that runs the HADR primary database
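The kill command for the primary node isn't shown here; following the same pattern as the secondary-node test below, it would look like (the PID is an example; look it up with ps first):
[sapadmin@az-idb01 ~]$ sudo ps -ef|grep db2sysc
[sapadmin@az-idb01 ~]$ sudo kill -9 <pid-of-db2sysc>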
The Db2 instance is going to fail, and Pacemaker will move the master resource to the other node and report the following status:
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=49, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms
Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's
running the secondary database instance and an error is reported.
Kill the Db2 process on the node that runs the secondary database instance
[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2id2 23144 23142 2 09:53 ? 00:00:13 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 23144
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_monitor_20000 on az-idb02 'not running' (7): call=144, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms
The Db2 instance gets restarted in the secondary role that it previously had.
Stop DB via db2stop force on the node that runs the HADR primary database instance
As user db2<sid>, execute the command db2stop force:
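For example, assuming the instance user db2id2 used throughout this example:
su - db2id2
db2stop force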
Failure detected:
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
The Db2 HADR secondary database instance got promoted into the primary role.
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
Crash the VM that runs the HADR primary database instance with "halt"
In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.
2 nodes configured
5 resources configured
The next step is to check for a split-brain situation. After the surviving node has determined that the node that last
ran the primary database instance is down, a failover of resources is executed.
2 nodes configured
5 resources configured
Online: [ az-idb02 ]
OFFLINE: [ az-idb01 ]
2 nodes configured
5 resources configured
Next steps
High-availability architecture and scenarios for SAP NetWeaver
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
SAP ASE Azure Virtual Machines DBMS deployment
for SAP workload
This document covers several areas to consider when deploying SAP ASE in Azure IaaS. As a
precondition to this document, you should have read the document Considerations for Azure Virtual Machines
DBMS deployment for SAP workload and other guides in the SAP workload on Azure documentation. This
document covers SAP ASE running on Linux and on Windows operating systems. The minimum supported
release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). It is recommended to deploy the latest version of
SAP ASE and the latest patch level. As a minimum, SAP ASE 16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is
recommended. The most recent versions of SAP ASE can be found in Targeted ASE 16.0 Release Schedule and CR list
Information.
Additional information about release support with SAP applications and installation media locations can be found,
besides in the SAP Product Availability Matrix, in these locations:
SAP support note #2134316
SAP support note #1941500
SAP support note #1590719
SAP support note #1973241
Remark: Throughout documentation within and outside the SAP world, the name of the product is referenced as
Sybase ASE or SAP ASE or in some cases both. In order to stay consistent, we use the name SAP ASE in this
documentation.
The huge page size is typically 2048 KB. For details, see the article Huge Pages on Linux.
NOTE
If a DBMS system is being moved from on-premises to Azure, it is recommended to perform monitoring on the VM and
assess the CPU, memory, IOPS, and storage throughput. Compare the peak values observed with the VM quota limits
documented in the articles mentioned above.
The examples given below are for illustrative purposes and can be modified based on individual needs. Due to the
design of SAP ASE, the number of data devices is not as critical as with other databases. The number of data
devices detailed in this document is a guide only.
An example of a configuration for a small SAP ASE DB Server with a database size between 50 GB – 250 GB, such
as SAP Solution Manager, could look like:
CONFIGURATION | WINDOWS | LINUX | COMMENTS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 2 x P10 (RAID0) | Premium storage: 2 x P10 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of physical RAM | 90% of physical RAM | assuming a single instance
An example of a configuration for a medium SAP ASE DB Server with a database size between 250 GB – 750 GB,
such as a smaller SAP Business Suite system, could look like:
CONFIGURATION | WINDOWS | LINUX | COMMENTS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4 x P20 (RAID0) | Premium storage: 4 x P20 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of physical RAM | 90% of physical RAM | assuming a single instance
An example of a configuration for a large SAP ASE DB Server with a database size between 750 GB – 2,000 GB,
such as a larger SAP Business Suite system, could look like:
CONFIGURATION | WINDOWS | LINUX | COMMENTS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4 x P30 (RAID0) | Premium storage: 4 x P30 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of physical RAM | 90% of physical RAM | assuming a single instance
An example of a configuration for a larger SAP ASE DB Server with a database size of 2 TB+, such as a larger
globally used SAP Business Suite system, could look like:
CONFIGURATION | WINDOWS | LINUX | COMMENTS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4+ x P30 (RAID0) | Premium storage: 4+ x P30 (RAID0) | Cache = Read Only, consider Azure Ultra disk
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE, consider Azure Ultra disk
ASE MaxMemory parameter | 90% of physical RAM | 90% of physical RAM | assuming a single instance
NOTE
The only supported configuration on Azure is using Fault Manager without Floating IP. The Floating IP Address method will
not work on Azure.
NOTE
If an SAP ASE database is encrypted, then Backup Dump Compression will not work. See also SAP support note #2680905.
icm/server_port_0 = PROT=HTTP,PORT=8000,PROCTIMEOUT=600,TIMEOUT=600
icm/server_port_1 = PROT=HTTPS,PORT=443$$,PROCTIMEOUT=600,TIMEOUT=600
https://<fullyqualifiedhostname>:44300/sap/bc/webdynpro/sap/dba_cockpit
http://<fullyqualifiedhostname>:8000/sap/bc/webdynpro/sap/dba_cockpit
Depending on how the Azure Virtual Machine hosting the SAP system is connected to your AD and DNS, you need
to make sure that ICM is using a fully qualified hostname that can be resolved on the machine where you are
opening the DBACockpit from. See SAP support note #773830 to understand how ICM determines the fully
qualified host name based on profile parameters and set parameter icm/host_name_full explicitly if necessary.
If you deployed the VM in a Cloud-Only scenario without cross-premises connectivity between on-premises and
Azure, you need to define a public IP address and a domain label. The format of the public DNS name of the VM
looks like:
https://mydomainlabel.westeurope.cloudapp.net:44300/sap/bc/webdynpro/sap/dba_cockpit
http://mydomainlabel.westeurope.cloudapp.net:8000/sap/bc/webdynpro/sap/dba_cockpit
Next steps
Check the article SAP workloads on Azure: planning and deployment checklist
SAP MaxDB, liveCache, and Content Server
deployment on Azure VMs
This document covers several different areas to consider when deploying MaxDB, liveCache, and Content Server in
Azure IaaS. As a precondition to this document, you should have read the document Considerations for Azure
Virtual Machines DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.
IMPORTANT
Like other databases, SAP MaxDB also has data and log files. However, in SAP MaxDB terminology the correct term is
"volume" (not "file"). For example, there are SAP MaxDB data volumes and log volumes. Do not confuse these with OS disk
volumes.
Backup / Restore
If you configure the SAP Content Server to store files in the SAP MaxDB database, the backup/restore procedure
and performance considerations are already described in SAP MaxDB chapters of this document.
If you configure the SAP Content Server to store files in the file system, one option is to execute manual
backup/restore of the whole file structure where the documents are located. Similar to SAP MaxDB backup/restore,
it is recommended to have a dedicated disk volume for backup purposes.
Other
Other SAP Content Server-specific settings are transparent to Azure VMs and are described in various documents
and SAP Notes:
https://service.sap.com/contentserver
SAP Note 1619726
SAP HANA high availability for Azure virtual
machines
You can use numerous Azure capabilities to deploy mission-critical databases like SAP HANA on Azure VMs. This
article provides guidance on how to achieve availability for SAP HANA instances that are hosted in Azure VMs.
The article describes several scenarios that you can implement by using the Azure infrastructure to increase
availability of SAP HANA in Azure.
Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in Azure, including:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the option to use JavaScript
Object Notation (JSON) templates.
This article also assumes that you are familiar with installing SAP HANA instances, and with administrating and
operating SAP HANA instances. It's especially important to be familiar with the setup and operations of HANA
system replication. This includes tasks like backup and restore for SAP HANA databases.
These articles provide a good overview of using SAP HANA in Azure:
Manual installation of single-instance SAP HANA on Azure VMs
Set up SAP HANA system replication in Azure VMs
Back up SAP HANA on Azure VMs
It's also a good idea to be familiar with these articles about SAP HANA:
High availability for SAP HANA
FAQ: High availability for SAP HANA
Perform system replication for SAP HANA
SAP HANA 2.0 SPS 01 What’s new: High availability
Network recommendations for SAP HANA system replication
SAP HANA system replication
SAP HANA service auto-restart
Configure SAP HANA system replication
Beyond being familiar with deploying VMs in Azure, before you define your availability architecture in Azure, we
recommend that you read Manage the availability of Windows virtual machines in Azure.
Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure region
This article describes several availability scenarios within one Azure region. Azure has many regions, spread
throughout the world. For the list of Azure regions, see Azure regions. For deploying SAP HANA on VMs within
one Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you
can deploy two VMs with two HANA instances within an Azure availability set that uses HANA system replication
for availability.
Currently, Azure offers Azure Availability Zones. This article does not describe Availability Zones in detail, but it
includes a general discussion about using Availability Sets versus Availability Zones.
Azure regions where Availability Zones are offered have multiple datacenters. The datacenters are independent in
the supply of power source, cooling, and network. The reason for offering different zones within a single Azure
region is to enable you to deploy applications across two or three of the offered Availability Zones. If issues in
power and networking affect only one Availability Zone's infrastructure, your application deployment within the
Azure region is still functional. Some reduced capacity might occur. For example, VMs in one zone might be lost,
but VMs in the other two zones would still be up and running.
An Azure Availability Set is a logical grouping capability that helps ensure that the VM resources that you place
within the Availability Set are failure-isolated from each other when they are deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute
racks, storage units, and network switches. In some Azure documentation, this configuration is referred to as
placements in different update and fault domains. These placements usually are within an Azure datacenter. If
power source and network issues affect the datacenter that you deployed into, all your capacity in that Azure
region would be affected.
The placement of datacenters that represent Azure Availability Zones is a compromise between delivering
acceptable network latency between services deployed in different zones, and a distance between datacenters.
Natural catastrophes ideally wouldn't affect the power, network supply, and infrastructure for all Availability Zones
in this region. However, as monumental natural catastrophes have shown, Availability Zones might not always
provide the availability that you want within one region. Think about Hurricane Maria that hit the island of Puerto
Rico on September 20, 2017. The hurricane basically caused a nearly 100 percent blackout on the 90-mile-wide
island.
Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to
host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of
other Azure components are sufficient for you to fulfill your availability SLAs for your customers. In this scenario,
you have no need to leverage an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely
on two different features:
Azure VM auto-restart (also referred to as Azure service healing)
SAP HANA auto-restart
Azure VM auto restart, or service healing, is a functionality in Azure that works on two levels:
The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.
A health check functionality monitors the health of every VM that's hosted on an Azure server host. If a VM falls
into a non-healthy state, a reboot of the VM can be initiated by the Azure host agent that checks the health of the
VM. The fabric controller checks the health of the host by checking many different parameters that might indicate
issues with the host hardware. It also checks on the accessibility of the host via the network. An indication of
problems with the host can lead to the following events:
If the host signals a bad health state, a reboot of the host and a restart of the VMs that were running on the
host is triggered.
If the host is not in a healthy state after a successful reboot, a redeployment of the VMs that were originally on
the now unhealthy node onto a healthy host server is initiated. In this case, the original host is marked as not
healthy. It won't be used for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on a healthy
host is triggered.
With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically
restarted on a healthy Azure host.
IMPORTANT
Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of
commonly used Linux releases don't automatically restart VMs or servers where the Linux kernel is in a panic state.
Instead, the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis.
Azure honors that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is
that such occurrences are extremely rare. You can override the default behavior to enable a restart of the VM. To change
the default behavior, set the parameter kernel.panic in /etc/sysctl.conf. The time you set for this parameter is in seconds.
Common recommended values are to wait 20-30 seconds before triggering the reboot through this parameter. See also
https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf.
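A minimal sketch of that setting, assuming a 20-second delay (the value is an example within the recommended 20-30 second range):
# /etc/sysctl.conf - reboot automatically 20 seconds after a kernel panic
kernel.panic = 20
Apply the change without a reboot by running sudo sysctl -p.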
The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM
starts automatically after the VM reboots. You can set up HANA service auto-restart through the watchdog
services of the various HANA services.
You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the
SAP HANA documentation, this setup is called host auto-failover. This configuration might make sense in an on-
premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the
host auto-failover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure
provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host auto-
failover. Because of Azure service healing, there is no reference architecture that foresees a standby node for
HANA host auto-failover.
Special case of SAP HANA scale -out configurations in Azure
High availability for SAP HANA scale-out configurations relies on service healing of Azure VMs and the restart
of the SAP HANA instance when the VM is up and running again. High availability architectures based on HANA
System Replication will be introduced at a later time.
SAP HANA system replication without auto failover and with data preload
In this scenario, data that's replicated to the HANA instance in the second VM is preloaded. This eliminates the two
advantages of not preloading data. In this case, you can't run another SAP HANA system on the second VM. You
also can't use a smaller VM size. Hence, customers rarely implement this scenario.
SAP HANA system replication with automatic failover
In the standard and most common availability configuration within one Azure region, two Azure VMs running
SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the Pacemaker framework, in
conjunction with a STONITH device.
From an SAP HANA perspective, the replication mode that's used is synchronous, and an automatic failover is
configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a
synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by
the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application
until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two
synchronous replication modes. For details and for a description of differences between these two synchronous
replication modes, see the SAP article Replication modes for SAP HANA system replication.
The overall configuration looks like:
You might choose this solution because it enables you to achieve an RPO=0 and a low RTO. Configure the SAP
HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system
replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to
the secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the
same.
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
For more information about SAP HANA availability across Azure regions, see:
SAP HANA availability across Azure regions
SAP HANA availability across Azure regions
This article describes scenarios related to SAP HANA availability across different Azure regions. Because of the
distance between Azure regions, setting up SAP HANA availability in multiple Azure regions involves special
considerations.
NOTE
In this configuration, you can't provide an RPO=0 because your HANA system replication mode is asynchronous. If you
need to provide an RPO=0, this configuration isn't the configuration of choice.
A small change that you can make in the configuration might be to configure data as preloading. However, given
the manual nature of failover and the fact that application layers also need to move to the second region, it might
not make sense to preload data.
If the organization has requirements for high-availability readiness in the second (DR) Azure region, then the
architecture would look like:
Using logreplay as operation mode, this configuration provides an RPO=0, with low RTO, within the primary
region. The configuration also provides decent RPO if a move to the second region is involved. The RTO times in
the second region are dependent on whether data is preloaded. Many customers use the VM in the secondary
region to run a test system. In that use case, the data can't be preloaded.
IMPORTANT
The operation modes between the different tiers need to be homogeneous. You can't use logreplay as the operation mode
between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can only choose one operation mode, which
needs to be consistent for all tiers. Since delta_datashipping is not suitable to give you an RPO=0, the only reasonable
operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some
restrictions, see the SAP article Operation modes for SAP HANA system replication.
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
SAP Business One on Azure Virtual Machines
This document provides guidance to deploy SAP Business One on Azure Virtual Machines. The documentation is
not a replacement for the SAP Business One installation documentation. It covers basic planning and deployment
guidelines for the Azure infrastructure to run Business One applications on.
Business One supports two different databases:
SQL Server - see SAP Note #928839 - Release Planning for Microsoft SQL Server
SAP HANA - for exact SAP Business One support matrix for SAP HANA, checkout the SAP Product Availability
Matrix
Regarding SQL Server, the basic deployment considerations as documented in the Azure Virtual Machines DBMS
deployment for SAP NetWeaver apply. For SAP HANA, considerations are mentioned in this document.
Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
Azure virtual machines on Windows
Azure virtual machines on Linux
Azure networking and virtual networks management with PowerShell
Azure networking and virtual networks with CLI
Manage Azure disks with the Azure CLI
Even if you are interested in Business One only, the document Azure Virtual Machines planning and
implementation for SAP NetWeaver can be a good source of information.
The assumption is that you, as the person deploying SAP Business One, are:
Familiar with installing SAP HANA on a given infrastructure like a VM
Familiar with installing the SAP Business One application on an infrastructure like Azure VMs
Familiar with operating SAP Business One and the DBMS system chosen
Familiar with deploying infrastructure in Azure
All these areas will not be covered in this document.
Besides Azure documentation, you should be aware of the main SAP Notes, which refer to Business One or which are
central Notes from SAP for Business One:
528296 - General Overview Note for SAP Business One Releases and Related Products
2216195 - Release Updates Note for SAP Business One 9.2, version for SAP HANA
2483583 - Central Note for SAP Business One 9.3
2483615 - Release Updates Note for SAP Business One 9.3
2483595 - Collective Note for SAP Business One 9.3 General Issues
2027458 - Collective Consulting Note for SAP HANA-Related Topics of SAP Business One, version for SAP HANA
For cases where the users are connecting through the internet without any private connectivity into Azure, the
design of the network in Azure should be aligned with the principles documented in the Azure reference
architecture for DMZ between Azure and the Internet.
Business One database server
For the database type, SQL Server and SAP HANA are available. Independent of the DBMS, you should read the
document Considerations for Azure Virtual Machines DBMS deployment for SAP workload to get a general
understanding of DBMS deployments in Azure VMs and the related networking and storage topics.
Though emphasized in the specific and generic database documents already, you should make yourself familiar
with:
Manage the availability of Windows virtual machines in Azure and Manage the availability of Linux virtual
machines in Azure
SLA for Virtual Machines
These documents should help you to decide on the selection of storage types and high availability configuration.
In principle you should:
Use Premium SSDs over Standard HDDs. To learn more about the available disk types, see our article Select a
disk type
Use Azure Managed disks over unmanaged disks
Make sure that you have sufficient IOPS and I/O throughput configured with your disk configuration
Combine the /hana/data and /hana/log volumes in order to have a cost-efficient storage configuration
SQL Server as DBMS
For deploying SQL Server as DBMS for Business One, go along the document SQL Server Azure Virtual Machines
DBMS deployment for SAP NetWeaver.
Rough sizing estimates for the DBMS side for SQL Server are:
NUMBER OF USERS | VCPUS | MEMORY | EXAMPLE VM TYPES
up to 20 | 4 | 16 GB | D4s_v3, E4s_v3
up to 40 | 8 | 32 GB | D8s_v3, E8s_v3
up to 80 | 16 | 64 GB | D16s_v3, E16s_v3
The sizing listed above should give you an idea of where to start. You may need fewer or more resources, in
which case adapting on Azure is easy. A change between VM types is possible with just a restart of the VM.
SAP HANA as DBMS
When using SAP HANA as the DBMS, you should follow the considerations of the document SAP HANA
on Azure operations guide.
For high availability and disaster recovery configurations around SAP HANA as database for Business One in Azure,
you should read the documentation SAP HANA high availability for Azure virtual machines and the documentation
pointed to from that document.
For SAP HANA backup and restore strategies, you should read the document Backup guide for SAP HANA on Azure
Virtual Machines and the documentation pointed to from that document.
Business One client server
For these components, storage considerations are not the primary concern. Nevertheless, you want to have a
reliable platform. Therefore, you should use Azure Premium Storage for this VM, even for the base VHD. Size the
VM with the data given in the SAP Business One Hardware Requirements Guide. For Azure, you need to focus on and
calculate with the requirements stated in chapter 2.4 of the document. As you calculate the requirements, you need
to compare them against the following documents to find the ideal VM for you:
Sizes for Windows virtual machines in Azure
SAP Note #1928533
Compare number of CPUs and memory needed to what is documented by Microsoft. Also keep network
throughput in mind when choosing the VMs.
Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on
Azure
This article describes how to deploy an SAP IDES system running with SQL Server and the Windows operating
system on Azure via the SAP Cloud Appliance Library (SAP CAL) 3.0. The screenshots show the step-by-step
process. To deploy a different solution, follow the same steps.
To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the new SAP
Cloud Appliance Library 3.0.
NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.
If you already created an SAP CAL account that uses the classic model, you need to create another SAP CAL
account. This account needs to exclusively deploy into Azure by using the Resource Manager model.
After you sign in to the SAP CAL, the first page usually leads you to the Solutions page. The solutions offered on
the SAP CAL are steadily increasing, so you might need to scroll quite a bit to find the solution you want. The
highlighted Windows-based SAP IDES solution that is available exclusively on Azure demonstrates the deployment
process:
NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.
2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize. The following page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:
6. Click Accept. If the authorization is successful, the SAP CAL account definition displays again. After a short time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review.
9. To create the association between your user and the newly created SAP CAL account, click Create.
NOTE
Before you can deploy the SAP IDES solution based on Windows and SQL Server, you might need to sign up for an SAP CAL
subscription. Otherwise, the solution might show up as Locked on the overview page.
Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows and SQL Server solution. Click Create Instance, and confirm the usage and terms and conditions.
2. On the Basic Mode: Create Instance page, you need to:
a. Enter an instance Name.
b. Select an Azure Region. You might need an SAP CAL subscription to get multiple Azure regions offered.
c. Enter the master Password for the solution, as shown:
3. Click Create. After some time, depending on the size and complexity of the solution (the SAP CAL provides an estimate), the status is shown as active and ready for use:
4. To find the resource group and all its objects that were created by the SAP CAL, go to the Azure portal. The virtual machine names start with the instance name that you gave in the SAP CAL.
5. On the SAP CAL portal, go to the deployed instances and click Connect. The following pop-up window appears:
6. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide.
The documentation names the users for each of the connectivity methods. The passwords for those users are
set to the master password you defined at the beginning of the deployment process. In the documentation,
other more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
Within a few hours, a healthy SAP IDES system is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
SAP LaMa connector for Azure
NOTE
General Support Statement: Please always open an incident with SAP on component BC-VCM-LVM-HYPERV if you need support for SAP LaMa or
the Azure connector.
SAP LaMa is used by many customers to operate and monitor their SAP landscape. Since SAP LaMa 3.0 SP05, it ships with a
connector to Azure by default. You can use this connector to deallocate and start virtual machines, copy and relocate managed disks,
and delete managed disks. With these basic operations, you can relocate, copy, clone, and refresh SAP systems using SAP LaMa.
This guide describes how to set up the Azure connector for SAP LaMa, how to create virtual machines that can be used to install
adaptive SAP systems, and how to configure them.
NOTE
The connector is only available in the SAP LaMa Enterprise Edition
Resources
The following SAP Notes are related to the topic of SAP LaMa on Azure:
NOTE NUMBER    TITLE
General remarks
Make sure to enable Automatic Mountpoint Creation in Setup -> Settings -> Engine
If this setting is not enabled and SAP LaMa mounts volumes using the SAP Adaptive Extensions on a virtual machine, the mount
point must already exist.
Use a separate subnet and don't use dynamic IP addresses, to prevent IP address "stealing" when new VMs are deployed while
SAP instances are unprepared
If you use dynamic IP address allocation in the subnet, which is also used by SAP LaMa, preparing an SAP system with SAP
LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual
machines.
If you sign in to managed hosts, make sure to not block file systems from being unmounted
If you sign in to a Linux virtual machine and change the working directory to a directory in a mount point, for example
/usr/sap/AH1/ASCS00/exe, the volume cannot be unmounted and a relocate or unprepare fails.
Make sure to disable CLOUD_NETCONFIG_MANAGE on SUSE SLES Linux virtual machines. For more details, see SUSE KB
7023633.
NOTE
If possible, remove all virtual machine extensions as they might cause long runtimes for detaching disks from a virtual machine.
Make sure that the users <hanasid>adm and <sapsid>adm and the group sapsys exist on the target machine with the same UID and
GID, or use LDAP. Enable and start the NFS server on the virtual machines that should be used to run the SAP NetWeaver (A)SCS.
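A minimal sketch of enabling the NFS server with systemd; the unit name differs by distribution (nfs-server on Red Hat Enterprise Linux, nfsserver on SUSE Linux Enterprise Server):
# enable the NFS server at boot and start it immediately (RHEL unit name shown)
sudo systemctl enable --now nfs-server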
Manual Deployment
SAP LaMa communicates with the virtual machine using the SAP Host Agent. If you deploy the virtual machines manually, or do not
use the Azure Resource Manager template from the quickstart repository, make sure to install the latest SAP Host Agent and the
SAP Adaptive Extensions. For more information about the required patch levels for Azure, see SAP Note 2343511.
Manual deployment of a Linux Virtual Machine
Create a new virtual machine with one of the supported operating systems listed in SAP Note 2343511. Add additional IP
configurations for the SAP instances. Each instance needs at least one IP address and must be installed using a virtual hostname.
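As a sketch, such an additional IP configuration can be added to an existing NIC with the Azure CLI (resource group, NIC name, and address are illustrative):
# add a secondary IP configuration for the virtual hostname of the ASCS instance
az network nic ip-config create --resource-group rg-sap --nic-name ah1-vm-nic --name ipconfig-ah1-ascs --private-ip-address 10.0.0.10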
The SAP NetWeaver ASCS instance needs disks for /sapmnt/<SAPSID>, /usr/sap/<SAPSID>, /usr/sap/trans, and
/usr/sap/<sapsid>adm. The SAP NetWeaver application servers do not need additional disks. Everything related to the SAP instance
must be stored on the ASCS and exported via NFS. Otherwise, it is currently not possible to add additional application servers using
SAP LaMa.
Manual deployment for SAP HANA
Create a new virtual machine with one of the supported operating systems for SAP HANA as listed in SAP Note 2343511. Add one
additional IP configuration for SAP HANA and one per HANA tenant.
SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log
Make sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine that you want to use to relocate an SAP
NetWeaver application server to or as a system copy/clone target.
SAP LaMa cannot relocate SQL Server itself so a virtual machine that you want to use to relocate a database instance to or as a
system copy/clone target needs SQL Server preinstalled.
Deploy Virtual Machine Using an Azure Template
Download the following latest available archives from the SAP Software Marketplace for the operating system of the virtual machines:
1. SAPCAR 7.21
2. SAP HOST AGENT 7.21
3. SAP ADAPTIVE EXTENSION 1.0 EXT
Also download the following components from the Microsoft Download Center
1. Microsoft Visual C++ 2010 Redistributable Package (x64) (Windows only)
2. Microsoft ODBC Driver for SQL Server (SQL Server only)
The components are required to deploy the template. The easiest way to make them available to the template is to upload them to an
Azure storage account and create a Shared Access Signature (SAS).
The templates have the following parameters:
sapSystemId: The SAP system ID. It is used to create the disk layout (for example /usr/sap/<sapsid>).
computerName: The computer name of the new virtual machine. This parameter is also used by SAP LaMa. When you use this
template to provision a new virtual machine as part of a system copy, SAP LaMa waits until the host with this computer name
can be reached.
osType: The type of the operating system you want to deploy.
dbtype: The type of the database. This parameter is used to determine how many additional IP configurations need to be added
and what the disk layout should look like.
sapSystemSize: The size of the SAP System you want to deploy. It is used to determine the virtual machine instance type and
size.
adminUsername: Username for the virtual machine.
adminPassword: Password for the virtual machine. You can also provide a public key for SSH.
sshKeyData: Public SSH key for the virtual machines. Only supported for Linux operating systems.
subnetId: The ID of the subnet you want to use.
deployEmptyTarget: You can deploy an empty target if you want to use the virtual machine as a target for an instance relocate
or similar. In this case, no additional disks or IP configurations are attached.
sapcarLocation: The location for the sapcar application that matches the operating system you deploy. sapcar is used to extract
the archives you provide in other parameters.
sapHostAgentArchiveLocation: The location of the SAP Host Agent archive. SAP Host Agent is deployed as part of this template
deployment.
sapacExtLocation: The location of the SAP Adaptive Extensions. SAP Note 2343511 lists the minimum patch level required for
Azure.
vcRedistLocation: The location of the VC Runtime that is required to install the SAP Adaptive Extensions. This parameter is only
required for Windows.
odbcDriverLocation: The location of the ODBC driver you want to install. Only Microsoft ODBC driver for SQL Server is
supported.
sapadmPassword: The password for the sapadm user.
sapadmId: The Linux User ID of the sapadm user. Not required for Windows.
sapsysGid: The Linux group ID of the sapsys group. Not required for Windows.
_artifactsLocation: The base URI, where artifacts required by this template are located. When the template is deployed using the
accompanying scripts, a private location in the subscription will be used and this value will be automatically generated. Only
needed if you do not deploy the template from GitHub.
_artifactsLocationSasToken: The sasToken required to access _artifactsLocation. When the template is deployed using the
accompanying scripts, a sasToken will be automatically generated. Only needed if you do not deploy the template from GitHub.
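The templates can be deployed with the Azure CLI, for example. The following is a sketch only: the template URI, resource group, and all parameter values are illustrative and need to be replaced with your own.
# deploy a VM prepared for SAP system AH1; remaining parameters are prompted or defaulted
az deployment group create --resource-group rg-sap-lama --template-uri <URI of the quickstart template> --parameters sapSystemId=AH1 computerName=ah1-ascs osType=<OS type> dbtype=HANA adminUsername=azureuser subnetId=<subnet resource ID>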
SAP HANA
In the examples below, we assume that you install SAP HANA with system ID HN1 and the SAP NetWeaver system with system ID
AH1. The virtual hostnames are hn1-db for the HANA instance, ah1-db for the HANA tenant used by the SAP NetWeaver system, ah1-
ascs for the SAP NetWeaver ASCS and ah1-di-0 for the first SAP NetWeaver application server.
Install SAP NetWeaver ASCS for SAP HANA using Azure Managed Disks
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of the virtual hostname of the
ASCS. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address
after a reboot.
Linux
# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-ascs -n 255.255.255.128
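Because the address assigned by sapacext does not survive a reboot, one possible way to remount it automatically is a cron @reboot entry (a sketch; a systemd unit would work equally well):
# crontab entry for root: remount the ASCS virtual IP address at every boot
@reboot /usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-ascs -n 255.255.255.128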
Windows
Run SWPM and use ah1-ascs for the ASCS Instance Host Name.
Linux
Add the following profile parameter to the SAP Host Agent profile, which is located at /usr/sap/hostctrl/exe/host_profile. For more
information, see SAP Note 2628497.
acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1
Install SAP NetWeaver ASCS for SAP HANA on Azure NetApp Files (ANF) BETA
NOTE
This functionality is not GA yet. For more information, refer to SAP Note 2815988 (only visible to preview customers). Open an SAP incident on
component BC-VCM-LVM-HYPERV and request to join the LaMa storage adapter for Azure NetApp Files preview.
ANF provides NFS for Azure. In the context of SAP LaMa this simplifies the creation of the ABAP Central Services (ASCS) instances and
the subsequent installation of application servers. Previously the ASCS instance had to act as NFS server as well and the parameter
acosprep/nfs_paths had to be added to the host_profile of the SAP Hostagent.
ANF is currently available in these regions:
Australia East, Central US, East US, East US 2, North Europe, South Central US, West Europe and West US 2.
Network Requirements
ANF requires a delegated subnet, which must be part of the same VNET as the SAP servers. Here's an example of such a
configuration. This screen shows the creation of the VNET and the first subnet:
The next step creates the delegated subnet for Microsoft.NetApp/volumes.
Now a NetApp account needs to be created within the Azure portal:
Within the NetApp account the capacity pool specifies the size and type of disks for each pool:
The NFS volumes can now be defined. Since there will be volumes for multiple systems in one pool, choose a self-explanatory
naming scheme. Adding the SID helps to group related volumes together. For the ASCS and the AS instance, the following mounts
are needed: /sapmnt/<SID>, /usr/sap/<SID>, and /home/<sid>adm. Optionally, /usr/sap/trans is needed for the central transport
directory, which is used by at least all systems of one landscape.
NOTE
During the BETA phase the name of the volumes must be unique within the subscription.
These steps need to be repeated for the other volumes as well.
Now these volumes need to be mounted to the systems where the initial installation with the SAP SWPM will be performed.
First the mount points need to be created. In this case the SID is AN1 so the following commands need to be executed:
mkdir -p /home/an1adm
mkdir -p /sapmnt/AN1
mkdir -p /usr/sap/AN1
mkdir -p /usr/sap/trans
Next the ANF volumes will be mounted with the following commands:
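A sketch of the mount commands, assuming an ANF endpoint IP of 10.0.2.4, volume export paths named after the mount points, and the NFSv3 mount options commonly used with ANF; replace the IP address and export paths with the values shown in the portal:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.0.2.4:/an1-home-an1adm /home/an1adm
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.0.2.4:/an1-sapmnt /sapmnt/AN1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.0.2.4:/an1-usr-sap /usr/sap/AN1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.0.2.4:/an1-usr-sap-trans /usr/sap/trans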
The mount commands can also be derived from the portal. The local mount points need to be adjusted.
Use the df -h command to verify.
(This is an example; the IP addresses and export paths are different from the ones used before.)
Install SAP HANA
If you install SAP HANA using the command-line tool hdblcm, use the parameter --hostname to provide a virtual hostname. You need to
add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use sapacext. If you
mount the IP address using sapacext, make sure to remount the IP address after a reboot.
Add another virtual hostname and IP address for the name that is used by the application servers to connect to the HANA tenant.
# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h hn1-db -n 255.255.255.128
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-db -n 255.255.255.128
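A minimal hdblcm sketch for such an installation (run from the extracted installation medium; SID, instance number, and path are illustrative, and passwords are prompted interactively):
./hdblcm --sid=HN1 --number=00 --hostname=hn1-db --sapmnt=/hana/shared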
Run the database instance installation of SWPM on the application server virtual machine, not on the HANA virtual machine. Use ah1-
db for the Database Host in dialog Database for SAP System.
Install SAP NetWeaver Application Server for SAP HANA
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of the virtual hostname of the
application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the
IP address after a reboot.
Linux
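The command follows the same pattern as for the ASCS; a sketch, with the interface name and netmask as placeholders taken from the earlier examples:
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-di-0 -n 255.255.255.128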
Windows
It is recommended to use SAP NetWeaver profile parameter dbs/hdb/hdb_use_ident to set the identity that is used to find the key in
the HDB userstore. You can add this parameter manually after the database instance installation with SWPM or run SWPM with
# from https://blogs.sap.com/2015/04/14/sap-hana-client-software-different-ways-to-set-the-connectivity-data/
/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO
If you set it manually, you also need to create new HDB userstore entries.
# run as <sapsid>adm
/usr/sap/AH1/hdbclient/hdbuserstore LIST
# reuse the port that was listed from the command above, in this example 35041
/usr/sap/AH1/hdbclient/hdbuserstore SET DEFAULT ah1-db:35041@AH1 SAPABAP1 <password>
Use ah1-di-0 for the PAS Instance Host Name in dialog Primary Application Server Instance.
Post-Installation Steps for SAP HANA
Make sure to back up the SYSTEMDB and all tenant databases before you try to do a tenant copy or a tenant move, or before you
create a system replication.
Microsoft SQL Server
In the examples below, we assume that you install the SAP NetWeaver system with system ID AS1. The virtual hostnames are as1-db
for the SQL Server instance used by the SAP NetWeaver system, as1-ascs for the SAP NetWeaver ASCS and as1-di-0 for the first SAP
NetWeaver application server.
Install SAP NetWeaver ASCS for SQL Server
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of the virtual hostname of the
ASCS. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address
after a reboot.
Run SWPM and use as1-ascs for the ASCS Instance Host Name.
Install SQL Server
You need to add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use
sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-db -n 255.255.255.128
Run the database instance installation of SWPM on the SQL server virtual machine. Use SAPINST_USE_HOSTNAME=as1-db to
override the hostname used to connect to SQL Server. If you deployed the virtual machine using the Azure Resource Manager
template, make sure to set the directory used for the database data files to C:\sql\data and database log file to C:\sql\log.
Make sure that the user NT AUTHORITY\SYSTEM has access to the SQL Server and has the server role sysadmin. For more
information, see SAP Notes 1877727 and 2562184.
Install SAP NetWeaver Application Server
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of the virtual hostname of the
application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the
IP address after a reboot.
Use as1-di-0 for the PAS Instance Host Name in dialog Primary Application Server Instance.
Troubleshooting
Errors and Warnings during Discover
The SELECT permission was denied
[Microsoft][ODBC SQL Server Driver][SQL Server]The SELECT permission was denied on the object
'log_shipping_primary_databases', database 'msdb', schema 'dbo'. [SOAPFaultException]
The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'.
Solution
Make sure that NT AUTHORITY\SYSTEM can access the SQL Server. See SAP Note 2562184
Errors and Warnings for Instance Validation
An exception was raised in validation of the HDB userstore
see Log Viewer
com.sap.nw.lm.aci.monitor.api.validation.RuntimeValidationException: Exception in validator with ID
'RuntimeHDBConnectionValidator' (Validation: 'VALIDATION_HDB_USERSTORE'): Could not retrieve the hdbuserstore
HANA userstore is not in the correct location
Solution
Make sure that /usr/sap/AH1/hdbclient/install/installation.ini is correct
Errors and Warnings during a System Copy
An error occurred when validating the system provisioning step
Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException Calling '/usr/sap/hostctrl/exe/sapacext -a
ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T
dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o
level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
Solution
Take a backup of all databases in the source HANA system
System Copy Step Start of database instance
Host Agent Operation '000D3A282BC91EE8A1D76CF1F92E2944' failed (OperationException. FaultCode: '127', Message:
'Command execution failed. : [Microsoft][ODBC SQL Server Driver][SQL Server]User does not have permission to alter
database 'AS2', the database does not exist, or the database is not in a state that allows access checks.')
Solution
Make sure that NT AUTHORITY\SYSTEM can access the SQL Server. See SAP Note 2562184
Errors and Warnings during a System Clone
Error occurred when trying to register instance agent in step Forced Register and Start Instance Agent of application server or
ASCS
Error occurred when trying to register instance agent. (RemoteException: 'Failed to load instance data from profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': Cannot access profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': No such file or directory.')
Solution
Make sure that the sapmnt share on the ASCS/SCS has Full Access for SAP_AS1_GlobalAdmin
Error in step Enable Startup Protection for Clone
Failed to open file '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0' Cause: No such file or directory
Solution
The computer account of the application server needs write access to the profile
Errors and Warnings during Create System Replication
Exception when clicking on Create System Replication
Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException Calling '/usr/sap/hostctrl/exe/sapacext -a
ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T
dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o
level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
Solution
Test if sapacext can be executed as <hanasid>adm
Error when full copy is not enabled in Storage Step
An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and
field targetStorageSystemId
Solution
Ignore the warnings in the step and try again. This issue will be fixed in a new support package/patch of SAP LaMa.
Errors and Warnings during Relocate
Path '/usr/sap/AH1' is not allowed for nfs reexports.
Check SAP Note 2628497 for details.
Solution
Add ASCS exports to ASCS HostAgent Profile. See SAP Note 2628497
Function not implemented when relocating ASCS
Command Output: exportfs: host:/usr/sap/AX1: Function not implemented
Solution
Make sure that the NFS server service is enabled on the relocate target virtual machine
Errors and Warnings during Application Server Installation
Error executing SAPinst step: getProfileDir
ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir'
reported an error: Node \\as1-ascs\sapmnt\AS1\SYS\profile does not exist. Start SAPinst in interactive mode to solve this
problem)
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: askUnicode
ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_getUnicode|ind|ind|ind|ind|unicode|0|askUnicode'
reported an error: Start SAPinst in interactive mode to solve this problem)
Solution
If you use a recent SAP kernel, SWPM cannot determine whether the system is a unicode system anymore using the
message server of the ASCS. See SAP Note 2445033 for more details.
This issue will be fixed in a new support package/patch of SAP LaMa.
Set profile parameter OS_UNICODE=uc in the default profile of your SAP system to work around this issue.
Error executing SAPinst step: dCheckGivenServer
Error executing SAPinst step: dCheckGivenServer" version="1.0" ERROR: (Last error reported by the step: <p> Installation
was canceled by user. </p>
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: checkClient
Error executing SAPinst step: checkClient" version="1.0" ERROR: (Last error reported by the step: <p> Installation was
canceled by user. </p>)
Solution
Make sure that the Microsoft ODBC driver for SQL Server is installed on the virtual machine on which you want to install
the application server
Error executing SAPinst step: copyScripts
Last error reported by the step: System call failed. DETAILS: Error 13 (0x0000000d) (Permission denied) in execution of
system call 'fopenU' with parameter (\\as1-ascs/sapmnt/AS1/SYS/exe/uc/NTAMD64/strdbs.cmd, w), line (494) in file
(\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp), stack trace:
CThrThread.cpp: 85: CThrThread::threadFunction()
CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
CSiStepExecute.cpp: 913: CSiStepExecute::execute()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
iaxxcfile.cpp: 183: iastring CIaOsFileConnect::callMemberFunction(iastring const& name, args_t const& args)
iaxxcfile.cpp: 1849: iastring CIaOsFileConnect::newFileStream(args_t const& _args)
iaxxbfile.cpp: 773: CIaOsFile::newFileStream_impl(4)
syxxcfile.cpp: 233: CSyFileImpl::openStream(ISyFile::eFileOpenMode)
syxxcfstrm.cpp: 29: CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFileOpenMode)
syxxcfstrm.cpp: 265: CSyFileStreamImpl::open()
syxxcfstrm2.cpp: 58: CSyFileStream2Impl::CSyFileStream2Impl(const CSyPath & \\aw1-
ascs/sapmnt/AW1/SYS/exe/uc/NTAMD64/strdbs.cmd, 0x4)
syxxcfstrm2.cpp: 456: CSyFileStream2Impl::open()
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: askPasswords
Last error reported by the step: System call failed. DETAILS: Error 5 (0x00000005) (Access is denied.) in execution of system
call 'NetValidatePasswordPolicy' with parameter (...), line (359) in file
(\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/account/synxcaccmg.cpp), stack trace:
CThrThread.cpp: 85: CThrThread::threadFunction()
CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
CSiStepExecute.cpp: 913: CSiStepExecute::execute()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
CSiStepExecute.cpp: 764: CSiStepExecute::invokeDialog()
DarkModeGuiEngine.cpp: 56: DarkModeGuiEngine::showDialogCalledByJs()
DarkModeDialog.cpp: 85: DarkModeDialog::submit()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
iaxxcaccount.cpp: 107: iastring CIaOsAccountConnect::callMemberFunction(iastring const& name, args_t const& args)
iaxxcaccount.cpp: 1186: iastring CIaOsAccountConnect::validatePasswordPolicy(args_t const& _args)
iaxxbaccount.cpp: 430: CIaOsAccount::validatePasswordPolicy_impl()
synxcaccmg.cpp: 297: ISyAccountMgt::PasswordValidationMessage
CSyAccountMgtImpl::validatePasswordPolicy(saponazure,*****) const )
Solution
Make sure to add a Host rule in step Isolation to allow communication from the VM to the domain controller
Next steps
SAP HANA on Azure operations guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Azure Virtual Machines high availability for SAP NetWeaver
Azure Virtual Machines is the solution for organizations that need compute, storage, and network resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classic
applications such as SAP NetWeaver-based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so you
can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and SAP
system landscape.
This series of articles covers:
Architecture and scenarios.
Infrastructure preparation.
SAP installation steps for deploying high-availability SAP systems in Azure by using the Azure Resource
Manager deployment model.
IMPORTANT
We strongly recommend that you use the Azure Resource Manager deployment model for your SAP installations. It
offers many benefits that are not available in the classic deployment model. Learn more about Azure deployment
models.
Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster
framework for SAP ASCS/SCS instances
Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster
framework for SAP ASCS/SCS instances with Azure NetApp files
Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up GlusterFS on RHEL
Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up Pacemaker on RHEL
Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for
SAP ASCS/SCS instances
Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for
SAP ASCS/SCS instances with Azure NetApp Files
Install SAP NetWeaver ASCS/SCS in high availability configuration on RHEL with Azure NetApp Files
High-availability architecture and scenarios for SAP NetWeaver
Terminology definitions
High availability: Refers to a set of technologies that minimize IT disruptions by providing business continuity
of IT services through redundant, fault-tolerant, or failover-protected components inside the same data center. In
our case, the data center resides within one Azure region.
Disaster recovery: Also refers to the minimizing of IT services disruption and their recovery, but across various
data centers that might be hundreds of miles away from one another. In our case, the data centers might reside in
various Azure regions within the same geopolitical region or in locations as established by you as a customer.
You usually don't need a specific high-availability solution for the SAP application server and dialog instances.
You achieve high availability by redundancy: you configure multiple dialog instances in separate instances of
Azure virtual machines. You should have at least two SAP application instances installed in two instances of Azure
virtual machines.
IMPORTANT
We strongly recommend that you use Azure managed disks for your SAP high-availability installations. Because managed
disks automatically align with the availability set of the virtual machine they are attached to, they increase the availability of
your virtual machine and the services that are running on it.
Windows
You can use a WSFC solution to protect the SAP ASCS/SCS instance. The solution has the following variants:
Cluster the SAP ASCS/SCS instance by using clustered shared disks: For more information about
this architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster
shared disk.
Cluster the SAP ASCS/SCS instance by using file share: For more information about this
architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share.
Cluster the SAP ASCS/SCS instance by using ANF SMB share: For more information about this
architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using ANF
SMB file share.
High-availability architecture for an SAP ASCS/SCS instance on Linux
Linux
For more information about clustering the SAP ASCS/SCS instance by using the SLES cluster framework, see
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications. For an
alternative HA architecture on SLES, which doesn't require highly available NFS, see High-availability guide
for SAP NetWeaver on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications.
For more information about clustering the SAP ASCS/SCS instance by using the Red Hat cluster framework, see
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
SAP NetWeaver multi-SID configuration for a clustered SAP ASCS/SCS instance
Windows
Multi-SID is supported with WSFC, using file share and shared disk.
For more information about multi-SID high-availability architecture on Windows, see:
SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and file share
SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and shared
disk
Linux
Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited to five SAP SIDs on
the same cluster. For more information about multi-SID high-availability architecture on Linux, see:
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
High-availability DBMS instance
The DBMS also is a single point of failure in an SAP system. You need to protect it by using a high-availability
solution. The following figure shows a SQL Server AlwaysOn high-availability solution in Azure, with Windows
Server Failover Clustering and the Azure internal load balancer. SQL Server AlwaysOn replicates DBMS data and
log files by using its own DBMS replication. In this case, you don't need cluster shared disk, which simplifies the
entire setup.
Figure 3: Example of a high-availability SAP DBMS, with SQL Server AlwaysOn
For more information about clustering SQL Server DBMS in Azure by using the Azure Resource Manager
deployment model, see these articles:
Configure an AlwaysOn availability group in Azure virtual machines manually by using Resource Manager
Configure an Azure internal load balancer for an AlwaysOn availability group in Azure
For more information about clustering SAP HANA DBMS in Azure by using the Azure Resource Manager
deployment model, see High availability of SAP HANA on Azure virtual machines (VMs).
Utilize Azure infrastructure VM restart to achieve “higher availability” of an SAP system
If you decide not to use functionalities such as Windows Server Failover Clustering (WSFC) or Pacemaker on Linux
(currently supported only for SUSE Linux Enterprise Server [SLES] 12 and later), you can utilize Azure VM restart. It
protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and of
the overall underlying Azure platform.
NOTE
Azure VM restart primarily protects VMs and not applications. Although VM restart doesn't offer high availability for SAP
applications, it does offer a certain level of infrastructure availability. It also indirectly offers “higher availability” of SAP
systems. There is also no SLA for the time it takes to restart a VM after a planned or unplanned host outage, which makes
this method of high availability unsuitable for the critical components of an SAP system. Examples of critical components
might be an ASCS/SCS instance or a database management system (DBMS).
Another important infrastructure element for high availability is storage. For example, the Azure Storage SLA is
99.9% availability. If you deploy all VMs and their disks in a single Azure storage account, potential Azure Storage
unavailability will cause the unavailability of all VMs that are placed in that storage account and all SAP
components that are running inside of the VMs.
Instead of putting all VMs into a single Azure storage account, you can use dedicated storage accounts for each VM.
By using multiple independent Azure storage accounts, you increase overall VM and SAP application availability.
Azure managed disks are automatically placed in the fault domain of the virtual machine they are attached to. If
you place two virtual machines in an availability set and use managed disks, the platform takes care of distributing
the managed disks into different fault domains as well. If you plan to use a premium storage account, we highly
recommend using managed disks.
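As a sketch with the Azure CLI (resource names, image alias, and VM size are illustrative), two application server VMs with managed disks can be placed into one availability set:
# create an availability set; managed disks are aligned with its fault domains automatically
az vm availability-set create --resource-group rg-sap --name avset-sap-app --platform-fault-domain-count 2 --platform-update-domain-count 5
# create two application server VMs in that availability set
az vm create --resource-group rg-sap --name sap-app-1 --image SLES --size Standard_E8s_v3 --availability-set avset-sap-app --generate-ssh-keys
az vm create --resource-group rg-sap --name sap-app-2 --image SLES --size Standard_E8s_v3 --availability-set avset-sap-app --generate-ssh-keys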
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high availability and storage
accounts might look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high availability and managed
disks might look like this:
For critical SAP components, you have achieved the following so far:
High availability of SAP application servers
SAP application server instances are redundant components. Each SAP application server instance is
deployed on its own VM, which is running in a different Azure fault and upgrade domain. For more
information, see the Fault domains and Upgrade domains sections.
You can ensure this configuration by using Azure availability sets. For more information, see the Azure
availability sets section.
Potential planned or unplanned unavailability of an Azure fault or upgrade domain will cause unavailability
of a restricted number of VMs with their SAP application server instances.
Each SAP application server instance is placed in its own Azure storage account. The potential unavailability
of one Azure storage account will cause the unavailability of only one VM with its SAP application server
instance. However, be aware that there is a limit on the number of Azure storage accounts within one Azure
subscription.
For more information, see High availability for SAP application servers.
Even if you use managed disks, the disks are stored in an Azure storage account and might be unavailable in
the event of a storage outage.
Higher availability of SAP ASCS/SCS instances
In this scenario, utilize Azure VM restart to protect the VM with the installed SAP ASCS/SCS instance. In the
case of planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As
mentioned earlier, Azure VM restart primarily protects VMs and not applications, in this case the ASCS/SCS
instance. Through the VM restart, you indirectly reach “higher availability” of the SAP ASCS/SCS instance.
To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the Autostart parameter in the
ASCS/SCS instance start profile, as described in the Using Autostart for SAP instances section. This setting
means that the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM will determine
the availability of the whole SAP landscape.
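For reference, Autostart is a single profile parameter in the instance start profile; a sketch, with an illustrative profile path:
# in /usr/sap/AH1/SYS/profile/AH1_ASCS00_ah1-ascs (path illustrative)
Autostart = 1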
Higher availability of the DBMS server
As in the preceding SAP ASCS/SCS instance use case, you utilize Azure VM restart to protect the VM with
installed DBMS software, and you achieve “higher availability” of DBMS software through VM restart.
A DBMS that's running in a single VM is also a SPOF, and it is the determinative factor for the availability of
the whole SAP landscape.
NOTE
The Autostart parameter has certain shortcomings as well. Specifically, the parameter triggers the start of an SAP ABAP or
Java instance when the related Windows or Linux service of the instance is started. That sequence occurs when the operating
system boots up. However, restarts of SAP services are also a common occurrence for SAP Software Lifecycle Management
functionality such as Software Update Manager (SUM) or other updates or upgrades. These functionalities do not expect
an instance to be restarted automatically. Therefore, the Autostart parameter should be disabled before you run such tasks.
The Autostart parameter also should not be used for SAP instances that are clustered, such as ASCS/SCS/CI.
For more information about Autostart for SAP instances, see the following articles:
Start or stop SAP along with your Unix Server Start/Stop
Starting and stopping SAP NetWeaver management agents
Next steps
For information about full SAP NetWeaver application-aware high availability, see SAP application high availability
on Azure IaaS.
SAP workload configurations with Azure Availability Zones
Azure Availability Zones is one of the high-availability features that Azure provides. Using Availability Zones
improves the overall availability of SAP workloads on Azure. This feature is already available in some Azure
regions. In the future, it will be available in more regions.
This graphic shows the basic architecture of SAP high availability:
The SAP application layer is deployed across one Azure availability set. For high availability of SAP Central
Services, you can deploy two VMs in a separate availability set. Use Windows Server Failover Clustering or
Pacemaker (Linux) as a high-availability framework with automatic failover in case of an infrastructure or software
problem. To learn more about these deployments, see:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
A similar architecture applies for the DBMS layer of SAP NetWeaver, S/4HANA, or Hybris systems. You deploy the
DBMS layer in an active/passive mode with a failover cluster solution to protect from infrastructure or software
failure. The failover cluster solution could be a DBMS-specific failover framework, Windows Server Failover
Clustering, or Pacemaker.
To deploy the same architecture by using Azure Availability Zones, you need to make some changes to the
architecture outlined earlier. This article describes these changes.
IMPORTANT
The measurements and decisions you make are valid for the Azure subscription you used when you took the measurements.
If you use another Azure subscription, you need to repeat the measurements. The mapping of enumerated zones might be
different for another Azure subscription.
IMPORTANT
It's expected that the measurements described earlier will provide different results in every Azure region that supports
Availability Zones. Even if your network latency requirements are the same, you might need to adopt different deployment
strategies in different Azure regions because the network latency between zones can be different. In some Azure regions, the
network latency among the three different zones can be vastly different. In other regions, the network latency among the
three different zones might be more uniform. The claim that there is always a network latency between 1 and 2 milliseconds
is not correct. The network latency across Availability Zones in Azure regions can't be generalized.
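One way to take such latency measurements is the niping tool that ships with the SAP kernel; a sketch, assuming one test VM per zone (block size and loop count are illustrative):
# on the VM in the first zone: start niping in server mode
niping -s -I 0
# on the VM in the second zone: 100 round trips with 10 KB blocks, reporting latency statistics
niping -c -H <server IP> -B 10000 -L 100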
Active/Active deployment
This deployment architecture is called active/active because you deploy your active SAP application servers across
two or three zones. The SAP Central Services instance that uses enqueue replication will be deployed between two
zones. The same is true for the DBMS layer, which will be deployed across the same zones as SAP Central Service.
When considering this configuration, you need to find the two Availability Zones in your region that offer cross-
zone network latency that's acceptable for your workload and your synchronous DBMS replication. You also want
to be sure the delta between network latency within the zones you selected and the cross-zone network latency
isn't too large. This is because you don't want large variations, depending on whether a job runs in-zone with the
DBMS server or across zones, in the running times of your business processes or batch jobs. Some variations are
acceptable, but not factors of difference.
A simplified schema of an active/active deployment across two zones could look like this:
IMPORTANT
In this active/active scenario, Microsoft announced additional charges for bandwidth starting 04/01/2020. Check the
document Bandwidth Pricing Details. The data transfer between the SAP application layer and the SAP DBMS layer is quite
intensive; therefore, the active/active scenario can contribute quite a bit to costs. Keep checking this article for the exact
costs.
Active/Passive deployment
If you can't find an acceptable delta between the network latency within one zone and the latency of cross-zone
network traffic, you can deploy an architecture that has an active/passive character from the SAP application layer
point of view. You define an active zone, which is the zone where you deploy the complete application layer and
where you attempt to run both the active DBMS and the SAP Central Services instance. With such a configuration,
you need to make sure you don't have extreme run time variations, depending on whether a job runs in-zone with
the active DBMS instance or not, in business transactions and batch jobs.
The basic layout of the architecture looks like this:
The following considerations apply for this configuration:
Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure
proximity placement groups as documented in the article Azure Proximity Placement Groups for optimal
network latency with SAP applications.
When you use this architecture, you need to monitor the status closely and try to keep the active DBMS and
SAP Central Services instances in the same zone as your deployed application layer. In case of a failover of
SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use
the Standard SKU Azure Load Balancer. The Basic Load Balancer won't work across zones. (A sketch of
creating such a load balancer follows this list.)
The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched
across zones. You don't need separate virtual networks for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks. Unmanaged disks aren't
supported for zonal deployments.
Azure Premium Storage and Ultra SSD storage don't support any type of storage replication across zones.
The application (DBMS or SAP Central Services) must replicate important data.
The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share
(Windows), or an NFS share (Linux). You need to use a technology that replicates these shared disks or
shares between the zones. These technologies are supported:
For Windows, a cluster solution that uses SIOS DataKeeper, as documented in Cluster an SAP ASCS/SCS
instance on a Windows failover cluster by using a cluster shared disk in Azure.
For SUSE Linux, an NFS share that's built as documented in High availability for NFS on Azure VMs on
SUSE Linux Enterprise Server.
Currently, the solution that uses Microsoft Scale-Out File Server, as documented in Prepare Azure
infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS
instances, is not supported across zones.
The third zone is used to host the SBD device in case you build a SUSE Linux Pacemaker cluster or
additional application instances.
You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start
application resources in case of a zone failure.
Azure Site Recovery is currently unable to replicate active VMs to dormant VMs between zones.
You should invest in automation that allows you, in case of a zone failure, to automatically start the SAP
application layer in the second zone.
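As a sketch with the Azure CLI (all names are placeholders), an internal Standard Load Balancer for the Central Services cluster could be created as follows; the frontend IP, health probes, and load-balancing rules for the cluster still need to be configured:
az network lb create --resource-group rg-sap --name lb-ah1-ascs --sku Standard --vnet-name vnet-sap --subnet subnet-sap --frontend-ip-name fe-ah1-ascs --backend-pool-name be-ah1-ascs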
NOTE
We recommend that you use a configuration like this only in certain circumstances. For example, you might use it when data
can't leave the Azure region for security or compliance reasons.
Next steps
Here are some next steps for deploying across Azure Availability Zones:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP
ASCS/SCS instances
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure
Windows
Windows Server failover clustering is the foundation of a high-availability SAP ASCS/SCS installation and DBMS
in Windows.
A failover cluster is a group of 1+n independent servers (nodes) that work together to increase the availability of
applications and services. If a node failure occurs, Windows Server failover clustering calculates the number of
failures that can occur and still maintain a healthy cluster to provide applications and services. You can choose
from different quorum modes to achieve failover clustering.
Prerequisites
Before you begin the tasks in this article, review the following article:
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
Windows Server failover clustering configuration in Azure without a shared disk
SAP ASCS/SCS HA with cluster shared disks
In Windows, an SAP ASCS/SCS instance contains SAP central services, the SAP message server, enqueue server
processes, and SAP global host files. SAP global host files store central files for the entire SAP system.
An SAP ASCS/SCS instance has the following components:
SAP central services:
Two processes, a message and enqueue server, and an <ASCS/SCS virtual host name>, which is used
to access these two processes.
File structure: S:\usr\sap\<SID>\ASCS/SCS<instance number>
SAP global host files:
File structure: S:\usr\sap\<SID>\SYS...
The sapmnt file share, which enables access to these global S:\usr\sap\<SID>\SYS... files by using
the following UNC path:
\\<ASCS/SCS virtual host name>\sapmnt\<SID>\SYS...
Processes, file structure, and global host sapmnt file share of an SAP ASCS/SCS instance
In a high-availability setting, you cluster SAP ASCS/SCS instances. We use clustered shared disks (drive S, in our
example) to place the SAP ASCS/SCS and SAP global host files.
TIP
You can find more information about Enqueue Replication Server 1 and 2 (ERS1 and ERS2) here:
Enqueue Replication Server in a Microsoft Failover Cluster
New Enqueue Replicator in Failover Cluster environments
IMPORTANT
When deploying an SAP ASCS/SCS Windows failover cluster with an Azure shared disk, be aware that your deployment
operates with a single shared disk in one storage cluster. Your SAP ASCS/SCS instance would be impacted in case of
issues with the storage cluster where the Azure shared disk is deployed.
TIP
Review the SAP Netweaver on Azure planning guide and the Azure Storage guide for SAP workloads for important
considerations when planning your SAP deployment.
Supported OS versions
Both Windows Server 2016 and 2019 are supported (use the latest data center images).
We strongly recommend using Windows Server 2019 Datacenter, as:
Windows 2019 Failover Cluster Service is Azure aware
There is added integration and awareness of Azure Host Maintenance and an improved experience through
monitoring for Azure scheduled events.
It is possible to use Distributed network name (it is the default option). Therefore, there is no need to have a
dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on the
Azure Internal Load Balancer.
Shared disks in Azure with SIOS DataKeeper
Another option for shared disk is to use third-party software SIOS DataKeeper Cluster Edition to create a
mirrored storage that simulates cluster shared storage. The SIOS solution provides real-time synchronous data
replication.
To create a shared disk resource for a cluster:
1. Attach an additional disk to each of the virtual machines in a Windows cluster configuration.
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes.
3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional disk attached
volume from the source virtual machine to the additional disk attached volume of the target virtual machine.
SIOS DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server
failover clustering as one shared disk.
Get more information about SIOS DataKeeper.
NOTE
You don't need shared disks for high availability with some DBMS products, like SQL Server. SQL Server AlwaysOn
replicates DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In this
case, the Windows cluster configuration doesn't need a shared disk.
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure
Windows
Windows Server failover clustering is the foundation of a high-availability SAP ASCS/SCS installation and DBMS
in Windows.
A failover cluster is a group of 1+n independent servers (nodes) that work together to increase the availability of
applications and services. If a node failure occurs, Windows Server failover clustering calculates the number of
failures that can occur and still maintain a healthy cluster to provide applications and services. You can choose
from different quorum modes to achieve failover clustering.
Prerequisites
Before you begin the tasks that are described in this article, review this article:
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
IMPORTANT
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
NOTE
An SMB file share is an alternative to using cluster shared disks for clustering SAP ASCS/SCS instances.
Figure 3: SAP <SID> cluster role resources for using a file share
Scale-out file shares with Storage Spaces Direct in Azure as an SAPMNT file share
You can use a scale-out file share to host and protect SAP global host files. A scale-out file share also offers a
highly available SAPMNT file share service.
Figure 4: A scale-out file share used to protect SAP global host files
IMPORTANT
Scale-out file shares are fully supported in the Microsoft Azure cloud, and in on-premises environments.
A scale-out file share offers a highly available and horizontally scalable SAPMNT file share.
Storage Spaces Direct is used as a shared disk for a scale-out file share. You can use Storage Spaces Direct to build
highly available and scalable storage using servers with local storage. Shared storage that is used for a scale-out
file share, like for SAP global host files, is not a single point of failure.
When choosing Storage Spaces Direct, consider these points:
The virtual machines used to build the Storage Spaces Direct cluster need to be deployed in an Azure
availability set.
For disaster recovery of a Storage Spaces Direct Cluster, you can use Azure Site Recovery Services.
It is not supported to stretch the Storage Spaces Direct cluster across different Azure Availability Zones.
SAP prerequisites for scale-out file shares in Azure
To use a scale-out file share, your system must meet the following requirements:
At least two cluster nodes for a scale-out file share.
Each node must have at least two local disks.
For performance reasons, you must use mirroring resiliency:
Two-way mirroring for a scale-out file share with two cluster nodes.
Three-way mirroring for a scale-out file share with three (or more) cluster nodes.
We recommend three (or more) cluster nodes for a scale-out file share, with three-way mirroring. This setup
offers more scalability and more storage resiliency than the scale-out file share setup with two cluster nodes
and two-way mirroring.
You must use Azure Premium disks.
We recommend that you use Azure Managed Disks.
We recommend that you format volumes by using Resilient File System (ReFS).
For more information, see SAP Note 1869038 - SAP support for ReFs filesystem and the Choosing the
file system chapter of the article Planning volumes in Storage Spaces Direct.
Be sure that you install Microsoft KB4025334 cumulative update.
You can use DS-Series or DSv2-Series Azure VM sizes.
For good network performance between VMs, which is needed for Storage Spaces Direct disk sync, use a VM
type that has at least a “high” network bandwidth. For more information, see the DSv2-Series and DS-Series
specifications.
We recommend that you reserve some unallocated capacity in the storage pool. Leaving some unallocated
capacity in the storage pool gives volumes space to repair "in place" if a drive fails. This improves data safety
and performance. For more information, see Choosing volume size.
You don't need to configure the Azure internal load balancer for the scale-out file share network name, such as
for <SAP global host>. This is done for the <ASCS/SCS virtual host name> of the SAP ASCS/SCS instance or
for the DBMS. A scale-out file share scales out the load across all cluster nodes. <SAP global host> uses the
local IP address for all cluster nodes.
IMPORTANT
You cannot rename the SAPMNT file share, which points to <SAP global host>. SAP supports only the share name
"sapmnt."
For more information, see SAP Note 2492395 - Can the share name sapmnt be changed?
Configure SAP ASCS/SCS instances and a scale-out file share in two clusters
You can deploy SAP ASCS/SCS instances in one cluster, with their own SAP <SID> cluster role. In this case, you
configure the scale-out file share on another cluster, with another cluster role.
IMPORTANT
In this scenario, the SAP ASCS/SCS instance is configured to access the SAP global host by using UNC path \\<SAP global
host>\sapmnt\<SID>\SYS.
Figure 5: An SAP ASCS/SCS instance and a scale-out file share deployed in two clusters
IMPORTANT
In the Azure cloud, each cluster that is used for SAP and scale-out file shares must be deployed in its own Azure availability
set or across Azure Availability Zones. This ensures distributed placement of the cluster VMs across the underlying Azure
infrastructure. Availability Zone deployments are supported with this technology.
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and file share for an SAP
ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and file share for an SAP ASCS/SCS instance
Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure
Storage Spaces Direct in Windows Server 2016
Deep dive: Volumes in Storage Spaces Direct
High availability for SAP NetWeaver on Azure VMs
on Windows with Azure NetApp Files (SMB) for SAP
applications
This article describes how to deploy and configure the virtual machines, install the cluster framework, and install a
highly available SAP NetWeaver 7.50 system on Windows VMs, using SMB on Azure NetApp Files.
The database layer isn't covered in detail in this article. We assume that the Azure virtual network has already been
created.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension for
SAP.
SAP Note 2287140 lists prerequisites for the SAP-supported CA (Continuous Availability) feature of the SMB 3.x
protocol.
SAP Note 2802770 has troubleshooting information for the slow-running SAP transaction AL11 on Windows
2012 and 2016.
SAP Note 1911507 has information about the transparent failover feature for a file share on Windows Server with
the SMB 3.0 protocol.
SAP Note 662452 has a recommendation (deactivating 8.3 name generation) to address poor file system
performance or errors during data access.
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances
on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
Add probe port in ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster
Create an SMB volume for Azure NetApp Files
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering an SAP ASCS/SCS instance
on a Windows failover cluster. Instead of using cluster shared disks, one can use an SMB file share to deploy the SAP
global host files. Azure NetApp Files supports SMBv3 (along with NFS) with NTFS ACLs using Active Directory. Azure
NetApp Files is automatically highly available (as it is a PaaS service). These features make Azure NetApp Files a
great option for hosting the SMB file share for the SAP global host files.
Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services (AD DS) are supported.
You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can be in
Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. In this article, we will use a domain
controller in an Azure VM.
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
Windows required building either a SOFS cluster or using cluster shared disk software like SIOS. Now it is
possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure
NetApp Files for the shared storage eliminates the need for either SOFS or SIOS.
NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
IMPORTANT
You need to create Active Directory connections before creating an SMB volume. Review the requirements for Active
Directory connections.
TIP
You can find the instructions on how to mount the Azure NetApp Files volume by navigating in the Azure portal to
the Azure NetApp Files object, selecting the Volumes blade, and then Mount Instructions.
Prepare the infrastructure for SAP HA by using a Windows failover
cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance.
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance.
5. If you are using Windows Server 2016, we recommend that you configure Azure Cloud Witness.
NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
IMPORTANT
If the Prerequisite Checker results in SWPM show that the Continuous availability feature condition is not met,
it can be addressed by following the instructions in Delayed error message when you try to access a shared folder
that no longer exists in Windows.
TIP
If the Prerequisite Checker results in SWPM show that the Swap Size condition is not met, you can adjust the
swap size by navigating to My Computer > System Properties > Performance Settings > Advanced > Virtual memory > Change.
4. Configure an SAP cluster resource, the SAP-<SID>-IP probe port, by using PowerShell. Execute this
configuration on one of the SAP ASCS/SCS cluster nodes, as described in Configure probe port.
Install an ASCS/SCS instance on the second ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the second cluster node. Start the SAP SWPM installation tool, then
navigate to Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability System >
ASCS/SCS instance > Additional cluster node.
Install a DBMS instance and SAP application servers
Complete your SAP installation by installing:
A DBMS instance
A primary SAP application server
An additional SAP application server
2. Restart cluster node A. The SAP cluster resources will move to cluster node B.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability and disaster recovery on
Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations, installation
commands, and so on, ASCS instance number 00, ERS instance number 02, and SAP system ID NW1 are used. The
names of the resources (for example virtual machines, virtual networks) in the example assume that you have used
the converged template with SAP system ID NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides: the guides contain all required information to set up NetWeaver HA and
SAP HANA System Replication on-premises. Use these guides as a general baseline; they provide much
more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use
virtual hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load
balancer; a scripted sketch of this configuration follows the list.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
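The portal steps described later accomplish the same configuration; as an alternative, the frontend, health probe,
and HA-ports rule for the (A)SCS can be scripted. The following is a minimal Azure CLI sketch, assuming a resource
group MyResourceGroup and a virtual network nw1-vnet with subnet default (both names are placeholders), plus
the example names and addresses used in this article:

# create a Standard internal load balancer with the ASCS frontend IP 10.0.0.7
az network lb create --resource-group MyResourceGroup --name nw1-lb \
  --sku Standard --vnet-name nw1-vnet --subnet default \
  --frontend-ip-name nw1-ascs-frontend --private-ip-address 10.0.0.7 \
  --backend-pool-name nw1-backend
# health probe on port 620<nr> (62000 for ASCS instance number 00)
az network lb probe create --resource-group MyResourceGroup --lb-name nw1-lb \
  --name nw1-ascs-hp --protocol tcp --port 62000 --interval 5 --threshold 2
# HA-ports rule (protocol All, ports 0) with floating IP and a 30-minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name nw1-lb \
  --name nw1-lb-ascs --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name nw1-ascs-frontend --backend-pool-name nw1-backend \
  --probe-name nw1-ascs-hp --floating-ip true --idle-timeout 30

The ERS frontend (10.0.0.8), its probe on port 621<nr>, and its rule are created the same way, and the cluster VMs'
network interfaces still need to be added to the backend pool.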
Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can
use to deploy new virtual machines. The marketplace image contains the resource agent for SAP NetWeaver.
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the
virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS Multi SID template or the converged template on the Azure portal. The ASCS/SCS template
only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only) instances whereas
the converged template also creates the load-balancing rules for a database (for example Microsoft SQL Server
or SAP HANA). If you plan to install an SAP NetWeaver based system and you also want to install the database
on the same machines, use the converged template.
2. Enter the following parameters
a. Resource Prefix (ASCS/SCS Multi SID template only)
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Sap System ID (converged template only)
Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the resources
that are deployed.
c. Stack Type
Select the SAP NetWeaver stack type
d. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
e. Db Type
Select HANA
f. Sap System Size.
The amount of SAPS the new system provides. If you are not sure how many SAPS the system requires,
ask your SAP Technology Partner or System Integrator
g. System Availability
Select HA
h. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
i. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should
be assigned to, specify the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP1; in this example, the SLES For SAP Applications 12 SP1 premium image is used:
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP1; in this example, the SLES For SAP Applications 12 SP1 premium image is used:
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-ARM
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend)
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual Machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-
aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600, 3900, 8100, 50013, 50014, and 50016 (all TCP) for the
ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302, 50213, 50214, and 50216 (all TCP) for the ASCS ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
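A minimal sketch of applying that setting immediately and persisting it across reboots:

# disable TCP timestamps at runtime
sudo sysctl -w net.ipv4.tcp_timestamps=0
# persist the setting
echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf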
NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of the package
sap-suse-cluster-connector. Make sure that you are using at least version 3.1.1 of the package
sap-suse-cluster-connector if you are using cluster nodes with a dash in the host name. Otherwise your cluster will
not work.
Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page
Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to match your environment.
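The lines themselves were not carried over here. A hedged example, assuming the virtual hostnames nw1-ascs,
nw1-aers, and nw1-db that this article uses elsewhere, together with the frontend IP addresses listed above:

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for the database
10.0.0.13 nw1-db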
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
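The file contents were also not carried over. Based on the pattern shown for the QAS system later in this document,
a sketch might look like this; the NFS server hostname nw1-nfs and the export paths are assumptions and must
match your NFS cluster:

# /etc/auto.master - add the following line, save and exit
/- /etc/auto.direct
# /etc/auto.direct - add direct mounts for sapmnt and trans, save and exit
/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans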
IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend
using azure-lb resource agent, which is part of package resource-agents, with the following package version
requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
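As a hedged sketch, an azure-lb health-probe resource for the ASCS, using this article's resource naming and probe
port 62000 for instance number 00, would be created like this:

# azure-lb listens on the probe port that the Azure load balancer health probe checks
sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000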
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group of
the ASCS00 folder and retry.
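A sketch of that fix, assuming the usual <sid>adm user and sapsys group naming for system ID NW1:

# nw1adm/sapsys follow standard SAP naming; adjust if your system differs
sudo chown nw1adm /usr/sap/NW1/ASCS00
sudo chgrp sapsys /usr/sap/NW1/ASCS00

The same pattern applies to the ERS02 folder in the corresponding step below.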
3. [1] Create a virtual IP resource and health-probe for the ERS instance
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.
If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the owner and group of the
ERS02 folder and retry.
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
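A minimal sketch of setting such a keepalive parameter; the value 300 is an example, so verify the exact parameters
and values against the SAP note:

# shorten the TCP keepalive time at runtime and persist it
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee -a /etc/sysctl.conf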
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
8. [1] Add the ASCS and ERS SAP services to the sapservices file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
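The commands were not carried over here. A hedged sketch, run as root on the first cluster node (nw1-cl-0) and
assuming the ERS was installed on nw1-cl-1:

# push the ASCS entry to the second node
cat /usr/sap/sapservices | grep ASCS00 | ssh nw1-cl-1 "cat >> /usr/sap/sapservices"
# pull the ERS entry from the second node and append it locally
ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 >> /usr/sap/sapservices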
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), define the resources as follows:
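The resource definitions themselves were not carried over here. A hedged sketch of the shape of an ENSA2
configuration in crmsh, using this article's instance names and assuming the groups g-NW1_ASCS and g-NW1_ERS
were created in earlier steps; the operation timeouts and meta attributes are illustrative and should be taken from
current SUSE/Microsoft guidance:

sudo crm configure property maintenance-mode="true"
# ASCS and ERS as SAPInstance resources, profiles as used elsewhere in this article
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ASCS00_nw1-ascs \
  START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
  AUTOMATIC_RECOVER=false meta resource-stickiness=5000
sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ERS02_nw1-aers \
  START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
  AUTOMATIC_RECOVER=false IS_ERS=true
sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02
# with ENSA2, ERS is only preferred (not forced) to run away from the ASCS
sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: \
  rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false
sudo crm configure property maintenance-mode="false"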
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r
sudo vi /etc/sysctl.conf
sudo vi /etc/hosts
Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to match your environment.
4. Configure autofs
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
sudo vi /etc/waagent.conf
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database, for example nw1-db and 10.0.0.13.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.
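A hedged sketch of such a sapinst invocation; the SWPM directory and the sapadmin user are placeholders, and
SAPINST_USE_HOSTNAME makes the installer bind to the virtual hostname:

# run from the extracted SWPM directory (placeholder path)
cd /path/to/swpm
sudo ./sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=nw1-db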
hdbuserstore List
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1
The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (HN1 in the
output above)!
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>
# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector 3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1
# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application
server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts
detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL
service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with
active ABAP SPOOL service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH
service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with
active ABAP BATCH service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG
service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with
active ABAP DIALOG service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE
service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with
active ABAP UPDATE service on multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-ascs_NW1_00), SAPInstance includes
is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00), Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00), Enqueue replication active
# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
3. Test HAFailoverToNode
Resource state before starting the test:
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00
Run the following command as root on the node where the ASCS instance is running
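The command itself was not carried over here. For this kind of node-crash test, a typical approach is to force an
immediate reboot via sysrq, along these lines; this is destructive, so only run it in a test cluster:

# run as root: immediately reboot the node without a clean shutdown
echo b > /proc/sysrq-trigger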
If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7): call=219, status=complete,
exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms
Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the
failed resources.
# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
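A sketch of those commands, using the first SBD device from the output above; run as root on the fenced node:

# clear the fencing message for this node, then start the cluster stack
sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 message nw1-cl-0 clear
systemctl start pacemaker
# clean up failed resource actions left over from the test
crm resource cleanup rsc_sap_NW1_ERS02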
Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands will stop the ASCS instance
and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. If
using enqueue server 2 architecture, the enqueue lock will be retained.
The enqueue lock of transaction su01 should be lost and the back-end should have been reset. Resource
state after the test:
Run the following commands as root to identify the process of the message server and kill it.
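The command was not carried over here; following the pattern of the enqueue-server test below, it would be along
these lines:

# run as root on the node where the ASCS instance is running
pgrep ms.sapNW1 | xargs kill -9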
If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough, Pacemaker
will eventually move the ASCS instance to the other node. Run the following commands as root to clean up
the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.
nw1-cl-0:~ # pgrep en.sapNW1 | xargs kill -9
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
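Following the same pattern, a sketch of that command:

# run as root on the node where the ERS instance is running
pgrep er.sapNW1 | xargs kill -9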
If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart will
not restart the process and the resource will be in a stopped state. Run the following commands as root to
clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
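The commands were not carried over here; a hedged sketch, where the PID shown is illustrative:

# run as root: find the sapstartsrv process of the ASCS instance and kill it
pgrep -fl ASCS00.*sapstartsrv
# 59545 sapstartsrv
kill -9 59545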
The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state after
the test:
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server with Azure NetApp
Files for SAP applications
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example
configurations, installation commands, and so on, the ASCS instance is number 00, the ERS instance is number 01,
the Primary Application Server instance (PAS) is 02, and the Additional Application Server instance (AAS) is 03.
SAP system ID QAS is used.
This article explains how to achieve high availability for SAP NetWeaver application with Azure NetApp Files. The
database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension for
SAP.
SAP Community WIKI (https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP
Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides: the guides contain all required information to set up NetWeaver HA and SAP
HANA System Replication on-premises. Use these guides as a general baseline; they provide much more
detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
SUSE Linux required building a separate highly available NFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using
Azure NetApp Files for the shared storage eliminates the need for an additional NFS cluster. Pacemaker is still
needed for HA of the SAP NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load
balancer.
(A)SCS
Frontend configuration
IP address 10.1.1.20
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.1.1.21
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .
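A minimal sketch of that edit on the VM; the domain value is the one given in this note:

sudo vi /etc/idmapd.conf
# In the [General] section, set:
# Domain = defaultv4iddomain.com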
2. [A] Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.1.0.4:/sapmnt/qas /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Azure Load Balancer manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load balancer
and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21
and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. Create a backend pool for the ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21
and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600, 3900, 8100, 50013, 50014, and 50016 (all
TCP) for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201, 3301, 50113, 50114, and 50116 (all TCP) for
the ASCS ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see
Azure Load balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration
is performed to allow routing to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps
will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load
Balancer health probes.
NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of the package
sap-suse-cluster-connector. Make sure that you are using at least version 3.1.1 of the package
sap-suse-cluster-connector if you are using cluster nodes with a dash in the host name. Otherwise your cluster will
not work.
Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo vi /etc/auto.master
# Add the following line to the file, save and exit
/- /etc/auto.direct
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASsys
NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes. If the
Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the Azure
NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping and make sure to
use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files volumes were created as NFSv3
volumes.
sudo vi /etc/waagent.conf
# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of the
ASCS00 folder and retry.
3. [1] Create a virtual IP resource and health-probe for the ERS instance
# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.
If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of the
ERS01 folder and retry.
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
8. [1] Add the ASCS and ERS SAP services to the sapservices file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), define the resources as follows:
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers
# IP address of all application servers
10.1.1.15 anftstsapa01
10.1.1.16 anftstsapa02
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASpas
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASaas
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASaas
sudo vi /etc/waagent.conf
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.
hdbuserstore List
KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the
output above)!
su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
3. Test HAFailoverToNode
Resource state before starting the test:
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
# Remove migration constraints
anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
#INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
Run the following command as root on the node where the ASCS instance is running
If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ anftstsapcl1 ]
OFFLINE: [ anftstsapcl2 ]
Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete,
exitreason='',
last-rc-change='Fri Mar 8 18:26:10 2019', queued=0ms, exec=0ms
Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the
failed resources.
# run as root
# list the SBD device(s)
anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-
36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"
Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands will stop the ASCS instance
and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. If
using enqueue server 2 architecture, the enqueue lock will be retained.
The enqueue lock of transaction su01 should be lost if using enqueue server replication 1 architecture, and
the back-end should have been reset. Resource state after the test:
Run the following commands as root to identify the process of the message server and kill it.
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state after
the test:
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP
NetWeaver on Red Hat Enterprise Linux
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations, installation
commands, and so on, ASCS instance number 00, ERS instance number 02, and SAP system ID NW1 are used. The
names of the resources (for example virtual machines, virtual networks) in the example assume that you have used
the ASCS/SCS template with Resource Prefix NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft
Azure
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster
and can be used by multiple SAP systems.
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load
balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Setting up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. Read GlusterFS on Azure VMs on
Red Hat Enterprise Linux for SAP NetWeaver on how to set up GlusterFS for SAP NetWeaver.
Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on GitHub to deploy all required resources. The template
deploys the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS template on the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Stack Type
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Db Type
Select HANA
e. Sap System Count
The number of SAP systems that run in this cluster. Select 1.
f. System Availability
Select HA
g. Admin Username, Admin Password or SSH key
A new user is created that can be used to sign in to the machine.
h. Subnet ID
If you want to deploy the VM into an existing VNet that has a subnet defined to which the VM should be
assigned, provide the ID of that specific subnet. The ID usually looks like /subscriptions/<subscription
ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-
aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-
aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600 , 3900 , 8100 , 50013 , 50014 , 50016 and TCP for the
ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302 , 50213 , 50214 , 50216 and TCP for the ASCS ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
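The note names the kernel parameter but not how to persist it; a minimal sketch, assuming a sysctl drop-in file
(the file name is arbitrary):

# Disable TCP timestamps immediately and persist the setting across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-lb-health-probes.conf
sudo sysctl --system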
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
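The entries themselves are not shown above; by analogy with the QAS example later in this document, they
would look like this, using the virtual hostnames and load balancer frontend IPs from this example:

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers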
sudo mount -a
sudo vi /etc/waagent.conf
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo pcs status
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00 , try setting the owner and group of
the ASCS00 folder and retry.
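Mirroring the ownership commands shown for the QAS example later in this document, a sketch:

sudo chown nw1adm /usr/sap/NW1/ASCS00
sudo chgrp sapsys /usr/sap/NW1/ASCS00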
3. [1] Create a virtual IP resource and health-probe for the ERS instance
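The creation commands are not shown above; a hedged sketch, using the ERS frontend IP (10.0.0.8) and probe
port (62102) from the load balancer configuration earlier in this article, and the resource names that appear in
the cluster status output later in this section:

sudo pcs resource create vip_NW1_AERS IPaddr2 ip=10.0.0.8 --group g-NW1_AERS
sudo pcs resource create nc_NW1_AERS azure-lb port=62102 --group g-NW1_AERS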
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo pcs status
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/NW1/ERS02 , try setting the owner and group of the
ERS02 folder and retry.
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
sudo vi /usr/sap/sapservices
# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm
# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or
newer and define the resources as follows:
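The resource definitions are not included above. A hedged sketch of the ASCS instance resource, using the
profile path and the resource and group names that appear elsewhere in this section; the exact timeouts and
meta attributes would come from the original definition:

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
    InstanceName=NW1_ASCS00_nw1-ascs \
    START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 \
    --group g-NW1_ASCS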
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
1. [A] Add firewall rules for ASCS and ERS on both nodes
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/waagent.conf
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database for example nw1-db and 10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.
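For example (sapadmin is a hypothetical non-root account created for this purpose):

sudo ./sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin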
hdbuserstore List
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: NW1
The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (NW1 in the
output above)!
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>
# Remove failed actions for the ERS that occurred as part of the migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02
Run the following command as root on the node where the ASCS instance is running
The status after the node is started again should look like this.
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete,
exitreason='',
last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms
Run the following commands as root to identify the process of the message server and kill it.
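By analogy with the enqueue replication server test below, a sketch of the kill command:

[root@nw1-cl-0 ~]# pgrep ms.sapNW1 | xargs kill -9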
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.
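By analogy with the cleanup command shown earlier in this section, a sketch:

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02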
Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.
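A sketch of the kill command; with ENSA1 the enqueue server process name starts with en.sap, while with
ENSA2 it is enq.sap:

# ENSA1
[root@nw1-cl-0 ~]# pgrep en.sapNW1 | xargs kill -9
# ENSA2
# pgrep enq.sapNW1 | xargs kill -9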
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@nw1-cl-1 ~]# pgrep er.sapNW1 | xargs kill -9
If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP
NetWeaver on Red Hat Enterprise Linux with Azure
NetApp Files for SAP applications
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example
configurations and installation commands, the ASCS instance is number 00, the ERS instance is number 01, the
Primary Application Server (PAS) instance is number 02, and the Additional Application Server (AAS) instance is
number 03. The SAP system ID QAS is used.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft
Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on
Red Hat Linux required building a separate, highly available GlusterFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using
Azure NetApp Files for the shared storage eliminates the need for an additional GlusterFS cluster. Pacemaker is
still needed for HA of the SAP NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the load balancer with
separate front-end IPs for (A)SCS and ERS.
(A)SCS
Frontend configuration
IP address 192.168.14.9
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 192.168.14.10
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Linux manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load balancer
and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600 , 3900 , 8100 , 500 13, 500 14, 500 16 and
TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201 , 3301 , 501 13, 501 14, 501 16 and TCP for
the ASCS ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see
Azure Load balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration
is performed to allow routing to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps
will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load
Balancer health probes.
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for
files on Azure NetApp Files volumes that are mounted on the VMs will be displayed as nobody .
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 192.168.24.5:/sapQAS /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more details on how to change nfs4_disable_idmapping parameter see
https://access.redhat.com/solutions/1749883.
Create Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo vi /etc/fstab
If using NFSv4.1:
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/waagent.conf
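The NFSv3 variant of the ASCS file system resource is not shown above; by analogy with the NFSv3 definition
used for the ERS below, a sketch:

# If using NFSv3
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
  --group g-QAS_ASCS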
# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00 , try setting the owner and group of the
ASCS00 folder and retry.
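Mirroring the ERS ownership commands below, a sketch:

sudo chown qasadm /usr/sap/QAS/ASCS00
sudo chgrp sapsys /usr/sap/QAS/ASCS00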
3. [1] Create a virtual IP resource and health-probe for the ERS instance
sudo pcs node unstandby anftstsapcl2
sudo pcs node standby anftstsapcl1
# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS
# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/QAS/ERS01 , try setting the owner and group of the
ERS01 folder and retry.
sudo chown qasadm /usr/sap/QAS/ERS01
sudo chgrp sapsys /usr/sap/QAS/ERS01
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
sudo vi /usr/sap/sapservices
# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm
# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
8. [1] Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2
support. If using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-
12.el7.x86_64 or newer and define the resources as follows:
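The definitions are not included above. A hedged sketch of the ERS instance resource, using the profile path and
the resource and group names that appear elsewhere in this section (IS_ERS marks the resource as the enqueue
replication server; exact timeouts and meta attributes would come from the original definition):

sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
    InstanceName=QAS_ERS01_anftstsapers \
    START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
    AUTOMATIC_RECOVER=false IS_ERS=true \
    --group g-QAS_AERS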
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
9. [A] Add firewall rules for ASCS and ERS on both nodes
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
192.168.14.10 anftstsapers
192.168.14.7 anftstsapa01
192.168.14.8 anftstsapa02
sudo vi /etc/fstab
If using NFSv4.1:
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3
# Mount
sudo mount -a
If using NFSv4.1:
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
# Mount
sudo mount -a
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3
# Mount
sudo mount -a
If using NFSv4.1:
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
# Mount
sudo mount -a
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.
KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the
output above)!
su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
# Remove failed actions for the ERS that occurred as part of the migration
[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
Run the following command as root on the node where the ASCS instance is running
The status after the node is started again should look like this.
Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete,
exitreason='',
Run the following commands as root to identify the process of the message server and kill it.
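As in the NW1 example earlier in this document, a sketch:

[root@anftstsapcl1 ~]# pgrep ms.sapQAS | xargs kill -9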
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@anftstsapcl2 ~]# pgrep er.sapQAS | xargs kill -9
If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Prepare the Azure infrastructure for SAP HA by
using a Windows failover cluster and shared disk for
SAP ASCS/SCS
Windows
This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a high-
availability SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk as an option for
clustering an SAP ASCS instance. Two alternatives for cluster shared disk are presented in the documentation:
Azure shared disks
Using SIOS DataKeeper Cluster Edition to create mirrored storage that simulates a clustered shared disk
The presented configuration relies on Azure proximity placement groups (PPG) to achieve optimal network
latency for SAP workloads. The documentation doesn't cover the database layer.
NOTE
Azure proximity placement groups are a prerequisite for using Azure shared disks.
Prerequisites
Before you begin the installation, review this article:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared
disk
Table columns: Host name role, Host name, Static IP address, Availability set, Proximity placement group
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
The following list shows the configuration of the (A)SCS/ERS load balancer. The configuration for both SAP ASCS
and ERS2 is performed in the same Azure load balancer.
(A)SCS
Frontend configuration
Static ASCS/SCS IP address 10.0.0.43
Backend configuration
Add all virtual machines that should be part of the (A)SCS/ERS cluster. In this example VMs pr1-ascs-10 and
pr1-ascs-11 .
Probe Port
Port 620<nr>. Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Make sure that Idle timeout (minutes) is set to max value 30, and that Floating IP (direct server
return) is Enabled.
ERS2
As Enqueue Replication Server 2 (ERS2) is also clustered, ERS2 virtual IP address must be also configured on
Azure ILB in addition to above SAP ASCS/SCS IP. This section only applies, if using Enqueue replication server 2
architecture.
2nd Frontend configuration
Static SAP ERS2 IP address 10.0.0.44
Backend configuration
The VMs were already added to the ILB backend pool.
2nd Probe Port
Port 621<nr>
Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
2nd Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Make sure that Idle timeout (minutes) is set to max value 30, and that Floating IP (direct server
return) is Enabled.
TIP
With the Azure Resource Manager Template for WSFC for SAP ASCS/SCS instance with Azure Shared Disk, you can
automate the infrastructure preparation, using Azure Shared Disk for one SAP SID with ERS1.
The Azure ARM template will create two Windows 2019 or 2016 VMs, create an Azure shared disk, and attach it to the VMs.
Azure Internal Load Balancer will be created and configured as well. For details, see the ARM template.
Once the feature installation has completed, reboot both cluster nodes.
Test and configure Windows failover cluster
On Windows 2019, the cluster will automatically recognize that it is running in Azure and, as a default option for
the cluster management IP, it will use a distributed network name. Therefore, it will use any of the cluster nodes'
local IP addresses. As a result, there is no need for a dedicated (virtual) network name for the cluster, and there is
no need to configure this IP address on Azure Internal Load Balancer.
For more information, see Windows Server 2019 Failover Clustering new features. Run this command on one of
the cluster nodes:
# IP address for cluster network name is needed ONLY on Windows Server 2016 cluster
$ClusterStaticIPAddress = "10.0.0.42"
# Test cluster
Test-Cluster -Node $ClusterNodes -Verbose
$ComputerInfo = Get-ComputerInfo
$WindowsVersion = $ComputerInfo.WindowsProductName
$ResourceGroupName = "MyResourceGroup"
$location = "MyAzureRegion"
$SAPSID = "PR1"
$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"
# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk
$NumberOfWindowsClusterNodes = 2
##################################
## Attach the disk to cluster VMs
##################################
# ASCS Cluster VM1
$ASCSClusterVM1 = "$SAPSID-ascs-10"
SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share
disk
This section is only applicable, if you are using the third-party software SIOS DataKeeper Cluster Edition to create
a mirrored storage that simulates cluster shared disk.
Now, you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS
instance, you need a shared disk resource. One of the options is SIOS DataKeeper Cluster Edition, a third-party
solution that you can use to create shared disk resources.
Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk involves these tasks:
Add Microsoft .NET Framework, if needed. See the SIOS documentation
(https://us.sios.com/products/datakeeper-cluster/) for the most up-to-date .NET Framework requirements
Install SIOS DataKeeper
Configure SIOS DataKeeper
Install SIOS DataKeeper
Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with SIOS
DataKeeper, create a synced mirror and then simulate cluster shared storage.
Before you install the SIOS software, create the DataKeeperSvc domain user.
NOTE
Add the DataKeeperSvc domain user to the Local Administrator group on both cluster nodes.
Enter the domain user name and password for the SIOS DataKeeper installation
5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35.
Enter your SIOS DataKeeper license key
6. When prompted, restart the virtual machine.
Configure SIOS DataKeeper
After you install SIOS DataKeeper on both nodes, start the configuration. The goal of the configuration is to have
synchronous data replication between the additional disks that are attached to each of the virtual machines.
1. Start the DataKeeper Management and Configuration tool, and then select Connect Server.
Define the base data for the node, which should be the current source node
5. Define the name, TCP/IP address, and disk volume of the target node.
Define the name, TCP/IP address, and disk volume of the current target node
6. Define the compression algorithms. In our example, we recommend that you compress the replication
stream. Especially in resynchronization situations, the compression of the replication stream dramatically
reduces resynchronization time. Compression uses the CPU and RAM resources of a virtual machine. As
the compression rate increases, so does the volume of CPU resources that are used. You can adjust this
setting later.
7. Another setting you need to check is whether the replication occurs asynchronously or synchronously.
When you protect SAP ASCS/SCS configurations, you must use synchronous replication.
DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45:
Next steps
Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
Prepare Azure infrastructure for SAP high availability
by using a Windows failover cluster and file share for
SAP ASCS/SCS instances
This article describes the Azure infrastructure preparation steps that are needed to install and configure high-
availability SAP systems on a Windows Server Failover Clustering (WSFC) cluster, using scale-out file share as an
option for clustering SAP ASCS/SCS instances.
Prerequisite
Before you start the installation, review the following article:
Architecture guide: Cluster SAP ASCS/SCS instances on a Windows failover cluster by using file share
SAP <SID>: PR1
SAP ASCS/SCS instance number: 00
SAP global host name: sapglobal (uses the IPs of all cluster nodes; n/a)
# Test cluster
Test-Cluster -node $nodes -Verbose
# Install cluster
$ClusterNetworkName = "sofs-cl"
$ClusterIP = "10.0.6.13"
New-Cluster -Name $ClusterNetworkName -Node $nodes -NoStorage -StaticAddress $ClusterIP -Verbose
IMPORTANT
We recommend that you have three or more cluster nodes for Scale-Out File Server with three-way mirroring.
In the Scale-Out File Server Resource Manager template UI, you must specify the VM count.
Figure 1 : UI screen for Scale-Out File Server Resource Manager template with managed disks
In the template, do the following:
1. In the Vm Count box, enter a minimum count of 2 .
2. In the Vm Disk Count box, enter a minimum disk count of 3 (2 disks + 1 spare disk = 3 disks).
3. In the Sofs Name box, enter the SAP global host network name, sapglobalhost .
4. In the Share Name box, enter the file share name, sapmnt .
Use unmanaged disks
The Azure Resource Manager template for deploying Scale-Out File Server with Storage Spaces Direct and Azure
Unmanaged Disks is available on GitHub.
Figure 2 : UI screen for the Scale-Out File Server Azure Resource Manager template without managed disks
In the Storage Account Type box, select Premium Storage . All other settings are the same as the settings for
managed disks.
Next steps
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS
instances
High availability for NFS on Azure VMs on SUSE
Linux Enterprise Server
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available NFS server that can be used to store the shared data of a highly
available SAP system. This guide describes how to set up a highly available NFS server that is used by two SAP
systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the
example assume that you have used the SAP file server template with resource prefix prod .
NOTE
This article contains references to the terms slave and master, terms that Microsoft no longer uses. When the terms are
removed from the software, we’ll remove them from this article.
The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP system that uses this
NFS server. On Azure, a load balancer is required to use a virtual IP address. The following list shows the
configuration of the load balancer.
Frontend configuration
IP address 10.0.0.4 for NW1
IP address 10.0.0.5 for NW2
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
Probe Port
Port 61000 for NW1
Port 61001 for NW2
Load balancing rules (if using basic load balancer)
2049 TCP for NW1
2049 UDP for NW1
2049 TCP for NW2
2049 UDP for NW2
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo ls /dev/disk/azure/scsi1/
Example output
lun0 lun1
ls /dev/disk/azure/scsi1/lun*-part*
Example output
/dev/disk/azure/scsi1/lun0-part1 /dev/disk/azure/scsi1/lun1-part1
sudo vi /etc/drbd.conf
Make sure that the drbd.conf file contains the following two lines
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
sudo vi /etc/drbd.d/global_common.conf
global {
usage-count no;
}
common {
handlers {
fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-
emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
}
startup {
wfc-timeout 0;
}
options {
}
disk {
md-flushes yes;
disk-flushes yes;
c-plan-ahead 1;
c-min-rate 100M;
c-fill-target 20M;
c-max-rate 4G;
}
net {
after-sb-0pri discard-younger-primary;
after-sb-1pri discard-secondary;
after-sb-2pri call-pri-lost-after-sb;
protocol C;
tcp-cork yes;
max-buffers 20000;
max-epoch-size 20000;
sndbuf-size 0;
rcvbuf-size 0;
}
}
7. [A] Create the NFS drbd devices
sudo vi /etc/drbd.d/NW1-nfs.res
Insert the configuration for the new drbd device and exit
resource NW1-nfs {
protocol C;
disk {
on-io-error detach;
}
on prod-nfs-0 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
}
sudo vi /etc/drbd.d/NW2-nfs.res
Insert the configuration for the new drbd device and exit
resource NW2-nfs {
protocol C;
disk {
on-io-error detach;
}
on prod-nfs-0 {
address 10.0.0.6:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
}
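The steps between the device definition and the synchronization wait (creating the DRBD metadata and bringing
the devices up) are not shown above; a minimal sketch assuming standard drbdadm usage:

# [A] Create the metadata and enable the devices on both nodes
sudo drbdadm create-md NW1-nfs
sudo drbdadm create-md NW2-nfs
sudo drbdadm up NW1-nfs
sudo drbdadm up NW2-nfs
# [1] Promote the devices on the first node only, starting the initial synchronization
sudo drbdadm primary --force NW1-nfs
sudo drbdadm primary --force NW2-nfs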
10. [1] Wait until the new drbd devices are synchronized
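A sketch of the wait, assuming the drbdsetup wait-sync-resource subcommand available in recent drbd-utils:

sudo drbdsetup wait-sync-resource NW1-nfs
sudo drbdsetup wait-sync-resource NW2-nfs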
The crossmnt option in the exportfs cluster resources is present in our documentation for backward
compatibility with older SLES versions.
3. [1] Disable maintenance mode
Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
GlusterFS on Azure VMs on Red Hat Enterprise
Linux for SAP NetWeaver
This article describes how to deploy the virtual machines, configure the virtual machines, and install a GlusterFS
cluster that can be used to store the shared data of a highly available SAP system. This guide describes how to set
up GlusterFS that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual
machines, virtual networks) in the example assume that you have used the SAP file server template with resource
prefix glust .
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster
and can be used by multiple SAP systems.
Set up GlusterFS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set, and network interfaces, or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on GitHub to deploy all required resources. The template
deploys the virtual machines, availability set etc. Follow these steps to deploy the template:
1. Open the SAP file server template in the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. SAP System Count
Enter the number of SAP systems that will use this file server. This will deploy the required number of
disks etc.
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Admin Username, Admin Password or SSH key
A new user is created that can be used to log on to the machine.
e. Subnet ID
If you want to deploy the VM into an existing VNet that has a subnet defined to which the VM should be
assigned, provide the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pools. We recommend standard load balancer.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Create Virtual Machine 3
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
7. Add one data disk for each SAP system to all three virtual machines.
Configure GlusterFS
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, [2] -
only applicable to node 2, [3] - only applicable to node 3.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
2. [A] Register
Register your virtual machines and attach them to a pool that contains repositories for RHEL 7 and GlusterFS
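A hedged sketch of the registration (the pool ID and the exact repository set depend on your subscription):

sudo subscription-manager register
# Attach a pool that provides the RHEL 7 and Red Hat Gluster Storage repositories
sudo subscription-manager attach --pool=<pool id>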
# Number of Peers: 2
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Accepted peer request (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Accepted peer request (Connected)
Use the following commands to create the GlusterFS volumes for NW2 and start them (the start commands are sketched after the create commands).
sudo gluster vol create NW2-sapmnt replica 3 glust-0:/rhs/NW2/sapmnt glust-1:/rhs/NW2/sapmnt glust-
2:/rhs/NW2/sapmnt force
sudo gluster vol create NW2-trans replica 3 glust-0:/rhs/NW2/trans glust-1:/rhs/NW2/trans glust-
2:/rhs/NW2/trans force
sudo gluster vol create NW2-sys replica 3 glust-0:/rhs/NW2/sys glust-1:/rhs/NW2/sys glust-
2:/rhs/NW2/sys force
sudo gluster vol create NW2-ascs replica 3 glust-0:/rhs/NW2/ascs glust-1:/rhs/NW2/ascs glust-
2:/rhs/NW2/ascs force
sudo gluster vol create NW2-aers replica 3 glust-0:/rhs/NW2/aers glust-1:/rhs/NW2/aers glust-
2:/rhs/NW2/aers force
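The start commands follow the same pattern for each volume, for example:

sudo gluster volume start NW2-sapmnt
sudo gluster volume start NW2-trans
sudo gluster volume start NW2-sys
sudo gluster volume start NW2-ascs
sudo gluster volume start NW2-aers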
Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Setting up Pacemaker on SUSE Linux Enterprise
Server in Azure
There are two options to set up a Pacemaker cluster in Azure. You can either use a fencing agent, which takes
care of restarting a failed node via the Azure APIs or you can use an SBD device.
The SBD device requires at least one additional virtual machine that acts as an iSCSI target server and
provides an SBD device. These iSCSI target servers can however be shared with other Pacemaker clusters. The
advantage of using an SBD device is that, if you are already using SBD devices on-premises, it doesn't require any
changes to how you operate the Pacemaker cluster. You can use up to three SBD devices for a Pacemaker
cluster to allow an SBD device to become unavailable, for example during OS patching of the iSCSI target
server. If you want to use more than one SBD device per Pacemaker cluster, make sure to deploy multiple iSCSI
target servers and connect one SBD from each iSCSI target server. We recommend using either one SBD device or
three. Pacemaker will not be able to automatically fence a cluster node if you only configure two SBD devices
and one of them is not available. If you want to be able to fence when one iSCSI target server is down, you
have to use three SBD devices and therefore three iSCSI target servers, which is the most resilient
configuration when using SBDs.
Azure Fence agent doesn't require deploying additional virtual machine(s).
IMPORTANT
When planning and deploying Linux Pacemaker clustered nodes and SBD devices, it is essential for the overall reliability
of the complete cluster configuration that the routing between the VMs involved and the VM(s) hosting the SBD
device(s) is not passing through any other devices like network virtual appliances (NVAs). Otherwise, issues and maintenance events with the NVA
can have a negative impact on the stability and reliability of the overall cluster configuration. In order to avoid such
obstacles, don't define routing rules of NVAs or User Defined Routing rules that route traffic between clustered nodes
and SBD devices through NVAs and similar devices when planning and deploying Linux Pacemaker clustered nodes and
SBD devices.
SBD fencing
Follow these steps if you want to use an SBD device for fencing.
Set up iSCSI target servers
You first need to create the iSCSI target virtual machines. iSCSI target servers can be shared with multiple
Pacemaker clusters.
1. Deploy new SLES 12 SP1 or higher virtual machines and connect to them via ssh. The machines don't need to be large. A virtual machine size like Standard_E2s_v3 or Standard_D2s_v3 is sufficient. Make sure to use Premium storage for the OS disk.
Run the following commands on all iSCSI target virtual machines.
1. Update SLES
NOTE
You might need to reboot the OS after you upgrade or update the OS.
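The update itself is the standard SLES update step; a minimal sketch (the exact packages pulled in depend on your registration and repositories):
sudo zypper update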
2. Remove packages
To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore errors about packages that cannot be found.
# Create the SBD device for the ASCS server of SAP System NW1
sudo targetcli backstores/fileio create sbdascsnw1 /sbd/sbdascsnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.ascsnw1.local:ascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/luns/ create /backstores/fileio/sbdascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1
# Create the SBD device for the database cluster of SAP System NW1
sudo targetcli backstores/fileio create sbddbnw1 /sbd/sbddbnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.dbnw1.local:dbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/luns/ create /backstores/fileio/sbddbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-0.local:nw1-db-0
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-1.local:nw1-db-1
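The targetcli ls output that follows also shows an SBD device for an NFS cluster (sbdnfs). If you host that device on the same iSCSI target server, it can be created following the same pattern; the commands below are a sketch, using the target and initiator names (iqn.2006-04.nfs.local:nfs, iqn.2006-04.nfs-0.local:nfs-0 and iqn.2006-04.nfs-1.local:nfs-1) that appear in the output below:
# Create the SBD device for the NFS cluster (sketch; names as shown in the ls output)
sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-0.local:nfs-0
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-1.local:nfs-1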
sudo targetcli ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 0]
  | o- fileio ............................................. [Storage Objects: 3]
  | | o- sbdascsnw1 ........... [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
  | | | o- alua ................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .................... [ALUA state: Active/optimized]
  | | o- sbddbnw1 ............... [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
  | | | o- alua ................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .................... [ALUA state: Active/optimized]
  | | o- sbdnfs ................... [/sbd/sbdnfs (50.0MiB) write-thru activated]
  | |   o- alua ................................................ [ALUA Groups: 1]
  | |     o- default_tg_pt_gp .................. [ALUA state: Active/optimized]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 3]
  | o- iqn.2006-04.ascsnw1.local:ascsnw1 .............................. [TPGs: 1]
  | | o- tpg1 ........................................... [no-gen-acls, no-auth]
  | |   o- acls ...................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 .......... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ....................... [lun0 fileio/sbdascsnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 .......... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ....................... [lun0 fileio/sbdascsnw1 (rw)]
  | |   o- luns ...................................................... [LUNs: 1]
  | |   | o- lun0 ........ [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
  | |   o- portals ................................................ [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................. [OK]
  | o- iqn.2006-04.dbnw1.local:dbnw1 .................................. [TPGs: 1]
  | | o- tpg1 ........................................... [no-gen-acls, no-auth]
  | |   o- acls ...................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-db-0.local:nw1-db-0 .............. [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ......................... [lun0 fileio/sbddbnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-db-1.local:nw1-db-1 .............. [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ......................... [lun0 fileio/sbddbnw1 (rw)]
  | |   o- luns ...................................................... [LUNs: 1]
  | |   | o- lun0 ............ [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
  | |   o- portals ................................................ [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................. [OK]
  | o- iqn.2006-04.nfs.local:nfs ...................................... [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ...................................................... [ACLs: 2]
  |     | o- iqn.2006-04.nfs-0.local:nfs-0 .................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ........................... [lun0 fileio/sbdnfs (rw)]
  |     | o- iqn.2006-04.nfs-1.local:nfs-1 .................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ........................... [lun0 fileio/sbdnfs (rw)]
  |     o- luns ...................................................... [LUNs: 1]
  |     | o- lun0 ................ [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
  |     o- portals ................................................ [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................. [OK]
  o- loopback ..................................................... [Targets: 0]
  o- vhost ........................................................ [Targets: 0]
  o- xen-pvscsi ................................................... [Targets: 0]
sudo vi /etc/iscsi/initiatorname.iscsi
Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI
target server, for example for the NFS server.
InitiatorName=iqn.2006-04.nfs-0.local:nfs-0
sudo vi /etc/iscsi/initiatorname.iscsi
Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI
target server
InitiatorName=iqn.2006-04.nfs-1.local:nfs-1
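After you change the initiator name, restart the iSCSI services so the change takes effect, and enable them so the devices are reconnected after a reboot. A minimal sketch, assuming the standard open-iscsi service names on SLES:
sudo systemctl restart iscsid
sudo systemctl restart iscsi
sudo systemctl enable iscsid
sudo systemctl enable iscsi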
Connect the iSCSI devices. In the example below, 10.0.0.17 is the IP address of the iSCSI target server and 3260 is the default port. iqn.2006-04.nfs.local:nfs is one of the target names that is listed when you run the first command below (iscsiadm -m discovery).
sudo iscsiadm -m discovery --type=st --portal=10.0.0.17:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.17:3260
sudo iscsiadm -m node -p 10.0.0.17:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic
# If you want to use multiple SBD devices, also connect to the second iSCSI target server
sudo iscsiadm -m discovery --type=st --portal=10.0.0.18:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.18:3260
sudo iscsiadm -m node -p 10.0.0.18:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic
# If you want to use multiple SBD devices, also connect to the third iSCSI target server
sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.19:3260
sudo iscsiadm -m node -p 10.0.0.19:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic
Make sure that the iSCSI devices are available and note down the device name (in the following example /dev/sde).
lsscsi
The command lists three device IDs for every SBD device. We recommend using the ID that starts with scsi-3; in the example above these are:
/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03
/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df
/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf
5. [1] Create the SBD device
Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node.
sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -1 60 -4 120 create
# Also create the second and third SBD devices if you want to use more than one.
sudo sbd -d /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -1 60 -4 120 create
sudo sbd -d /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -1 60 -4 120 create
sudo vi /etc/sysconfig/sbd
Change the property of the SBD device, enable the pacemaker integration, and change the start mode
of SBD.
[...]
SBD_DEVICE="/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]
Cluster installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, or [2] - only applicable to node 2.
1. [A] Update SLES
NOTE
Check the version of package resource-agents and make sure the minimum version requirements are met:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12
servers with large RAM.
sudo vi /etc/sysctl.conf
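The settings below are a sketch based on the commonly recommended values from the referenced SLES guidance; verify them against that article for your environment:
# Change/set the following settings in /etc/sysctl.conf
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800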
NOTE
Check the installed version of package cloud-netconfig-azure by running zypper info cloud-netconfig-azure. If the version in your environment is 1.3 or higher, it is no longer necessary to suppress the management of network interfaces by the cloud network plugin. If the version is lower than 1.3, we suggest updating package cloud-netconfig-azure to the latest available version.
Change the configuration file for the network interface as shown below to prevent the cloud network
plugin from removing the virtual IP address (Pacemaker must control the VIP assignment). For more
information, see SUSE KB 7023633.
# Edit the configuration file
sudo vi /etc/sysconfig/network/ifcfg-eth0
# Change CLOUD_NETCONFIG_MANAGE
# CLOUD_NETCONFIG_MANAGE="yes"
CLOUD_NETCONFIG_MANAGE="no"
[1] Create an ssh key on the first node
sudo ssh-keygen
# Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
# Enter passphrase (empty for no passphrase): -> Press ENTER
# Enter same passphrase again: -> Press ENTER
[2] Create an ssh key on the second node
sudo ssh-keygen
# Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
# Enter passphrase (empty for no passphrase): -> Press ENTER
# Enter same passphrase again: -> Press ENTER
[2] Insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys
[1] Insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys
9. [A] Install the fence agents package, if using a STONITH device based on the Azure fence agent.
IMPORTANT
The installed version of package fence-agents must be at least 4.4.0 to benefit from the faster failover times with the Azure fence agent if a cluster node needs to be fenced. We recommend that you update the package if you are running a lower version.
# You may need to activate the Public cloud extension first. In this example the SUSEConnect command is for SLES 15 SP1
SUSEConnect -p sle-module-public-cloud/15.1/x86_64
sudo zypper install python3-azure-mgmt-compute
IMPORTANT
Depending on your version and image type, you may need to activate the Public cloud extension for your OS release before you can install the Azure Python SDK. You can check the extension by running SUSEConnect --list-extensions.
To achieve the faster failover times with the Azure fence agent:
on SLES 12 SP4 or SLES 12 SP5, install version 4.6.2 or higher of package python-azure-mgmt-compute
on SLES 15, install version 4.6.2 or higher of package python3-azure-mgmt-compute
IMPORTANT
If you are using host names in the cluster configuration, it is vital to have reliable host name resolution. Cluster communication will fail if the names are not available, which can lead to cluster failover delays. The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failure too.
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
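For example (the host names below are assumptions for illustration; the addresses 10.0.0.6 and 10.0.0.7 match the node addresses used in the corosync configuration later in this section):
# IP address of the first cluster node
10.0.0.6 prod-cl1-0
# IP address of the second cluster node
10.0.0.7 prod-cl1-1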
sudo ha-cluster-init -u
sudo ha-cluster-join
sudo vi /etc/corosync/corosync.conf
Add the following content to the file if the values are not there or differ. Make sure to change the token to 30000 to allow memory-preserving maintenance. For more information, see this article for Linux or Windows.
[...]
  token: 30000
  token_retransmits_before_loss_const: 10
  join: 60
  consensus: 36000
  max_messages: 20
  interface {
    [...]
  }
  transport: udpu
}
nodelist {
  node {
    ring0_addr: 10.0.0.6
  }
  node {
    ring0_addr: 10.0.0.7
  }
}
logging {
  [...]
}
quorum {
  # Enable and configure quorum subsystem (default: off)
  # see also corosync.conf.5 and votequorum.5
  provider: corosync_votequorum
  expected_votes: 2
  two_node: 1
}
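Then restart the corosync service on the nodes so that the new configuration takes effect; a minimal sketch:
sudo service corosync restart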
The Azure fence agent needs permission to power off and start the virtual machines. A custom role definition like the following can be used; adjust the assignableScopes to your subscription IDs:
{
"properties": {
"roleName": "Linux Fence Agent Role",
"description": "Allows to power-off and start virtual machines",
"assignableScopes": [
"/subscriptions/c276fc76-9cd4-44c9-99a7-4fd71546436e",
"/subscriptions/e91d47c4-76f3-4271-a796-21b4ecfe3624"
],
"permissions": [
{
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"notActions": [],
"dataActions": [],
"notDataActions": []
}
]
}
}
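One way to create the custom role from this JSON definition is the Azure CLI; a sketch, assuming the definition is saved locally in a file named fence-agent-role.json (a hypothetical file name):
az role definition create --role-definition fence-agent-role.json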
IMPORTANT
The monitoring and fencing operations are de-serialized. As a result, if there is a longer-running monitoring operation and a simultaneous fencing event, there is no delay to the cluster failover caused by the already running monitoring operation.
TIP
Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions,
in Public endpoint connectivity for VMs using standard ILB.
NOTE
After you configure the Pacemaker resources for the azure-events agent, when you place the cluster in or out of maintenance mode, you may get warning messages like:
WARNING: cib-bootstrap-options: unknown attribute 'hostName_hostname'
WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState'
WARNING: cib-bootstrap-options: unknown attribute 'hostName_hostname'
These warning messages can be ignored.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
12/22/2020 • 9 minutes to read
Cluster installation
NOTE
Red Hat doesn't support software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For details, see Support Policies for RHEL High Availability Clusters - sbd and fence_sbd. The only supported fencing mechanism for Pacemaker Red Hat Enterprise Linux clusters on Azure is the Azure fence agent.
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, or [2] - only applicable to node 2.
1. [A] Register. This step is not required if using RHEL SAP HA-enabled images.
Register your virtual machines and attach them to a pool that contains repositories for RHEL 7.
By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for
your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To
mitigate this, Azure now provides BYOS RHEL images. More information is available here.
2. [A] Enable RHEL for SAP repos. This step is not required if using RHEL SAP HA-enabled images.
In order to install the required packages, enable the following repositories.
Check the version of the Azure fence agent. If necessary, update it to a version equal to or later than the one stated above.
IMPORTANT
If you need to update the Azure fence agent, and if you are using a custom role, make sure to update the custom role to include the action powerOff. For details, see Create a custom role for the fence agent.
IMPORTANT
If you are using host names in the cluster configuration, it is vital to have reliable host name resolution. Cluster communication will fail if the names are not available, which can lead to cluster failover delays. The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failure too.
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
# Run the following command until the status of both nodes is online
sudo pcs status
# Cluster name: nw1-azr
# WARNING: no stonith devices and stonith-enabled is not false
# Stack: corosync
# Current DC: prod-cl1-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
# Last updated: Fri Aug 17 09:18:24 2018
# Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on prod-cl1-1
#
# 2 nodes configured
# 0 resources configured
#
# Online: [ prod-cl1-0 prod-cl1-1 ]
#
# No resources
#
# Daemon Status:
# corosync: active/disabled
# pacemaker: active/disabled
# pcsd: active/enabled
NOTE
Option 'pcmk_host_map' is ONLY required in the command if the RHEL host names and the Azure node names are NOT identical. Refer to the pcmk_host_map parameter in the command.
For RHEL 7.X , use the following command to configure the fence device:
sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:10.0.0.6;prod-cl1-1:10.0.0.7" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
op monitor interval=3600
For RHEL 8.X , use the following command to configure the fence device:
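A sketch, modeled on the RHEL 7.X command above; on RHEL 8 the fence_azure_arm agent takes username and password instead of login and passwd (verify the parameter names against your installed fence-agents version):
sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:10.0.0.6;prod-cl1-1:10.0.0.7" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
op monitor interval=3600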
IMPORTANT
The monitoring and fencing operations are de-serialized. As a result, if there is a longer-running monitoring operation and a simultaneous fencing event, there is no delay to the cluster failover caused by the already running monitoring operation.
TIP
Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in
Public endpoint connectivity for VMs using standard ILB.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios
12/22/2020 • 10 minutes to read
This article describes configurations that enable outbound connectivity to public endpoint(s). The configurations are mainly in the context of high availability with Pacemaker for SUSE/RHEL.
If you are using Pacemaker with the Azure fence agent in your high availability solution, the VMs must have outbound connectivity to the Azure management API. This article presents several options so that you can select the one best suited for your scenario.
Overview
When implementing high availability for SAP solutions via clustering, one of the necessary components is
Azure Load Balancer. Azure offers two load balancer SKUs: standard and basic.
The Standard Azure load balancer offers some advantages over the Basic load balancer. For instance, it works across Azure Availability Zones, it has better monitoring and logging capabilities for easier troubleshooting, and it has reduced latency. The "HA ports" feature covers all ports, so it is no longer necessary to list each individual port.
There are some important differences between the basic and the standard SKU of Azure load balancer. One of
them is the handling of outbound traffic to public end point. For full Basic versus Standard SKU load balancer
comparison, see Load Balancer SKU comparison.
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there is no outbound connectivity to public end points, unless additional
configuration is done.
If a VM is assigned a public IP address, or the VM is in the backend pool of a load balancer with public IP
address, it will have outbound connectivity to public end points.
SAP systems often contain sensitive business data. It is rarely acceptable for VMs hosting SAP systems to be accessible via public IP addresses. At the same time, there are scenarios that require outbound connectivity from the VM to public endpoints.
Examples of scenarios requiring access to Azure public endpoints are:
Azure Fence Agent requires access to management.azure.com and login.microsoftonline.com
Azure Backup
Azure Site Recovery
Using public repository for patching the Operating system
The SAP application data flow may require outbound connectivity to public end point
If your SAP deployment doesn’t require outbound connectivity to public end points, you don’t need to
implement the additional configuration. It is sufficient to create internal standard SKU Azure Load Balancer for
your high availability scenario, assuming that there is also no need for inbound connectivity from public end
points.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard
Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to
allow routing to public end points.
If the VMs have either public IP addresses or are already in the backend pool of Azure Load balancer with public IP
address, the VM will already have outbound connectivity to public end points.
TIP
Where possible, use Service tags to reduce the complexity of the Network Security Group .
Deployment steps
1. Create Load Balancer
a. In the Azure portal , click All resources, Add, then search for Load Balancer
b. Click Create
c. Load Balancer Name MyPublicILB
d. Select Public as a Type, Standard as SKU
e. Select Create Public IP address and specify as a name MyPublicILBFrontEndIP
f. Select Zone Redundant as Availability zone
g. Click Review and Create, then click Create
2. Create Backend pool MyBackendPoolOfPublicILB and add the VMs.
a. Select the Virtual network
b. Select the VMs and their IP addresses and add them to the backend pool
3. Create outbound rules. Currently it is not possible to create outbound rules from the Azure portal. You can create outbound rules with the Azure CLI, as shown in the sketch after these steps.
4. Create Network Security group rules to restrict access to specific Public End Points. If there is existing
Network Security Group, you can adjust it. The example below shows how to enable access to the
Azure management API:
a. Navigate to the Network Security Group
b. Click Outbound Security Rules
c. Add a rule to Deny all outbound Access to Internet .
d. Add a rule to Allow access to AzureCloud , with priority lower than the priority of the rule to deny
all internet access.
The outbound security rules would look like:
For more information on Azure Network security groups, see Security Groups .
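As noted in step 3 above, outbound rules are created with the Azure CLI. A minimal sketch, assuming the resource names used in these steps and a hypothetical resource group named MyResourceGroup (the frontend IP configuration and backend pool names must match what you created in the portal):
az network lb outbound-rule create --resource-group MyResourceGroup --lb-name MyPublicILB --name MyOutboundRule --protocol All --frontend-ip-configs MyPublicILBFrontEndIP --address-pool MyBackendPoolOfPublicILB --idle-timeout 30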
TIP
Where possible, use Service tags to reduce the complexity of the Azure Firewall rules.
Deployment steps
1. The deployment steps assume that you already have Virtual network and subnet defined for your VMs.
2. Create Subnet AzureFirewallSubnet in the same Virtual Network, where the VMs and the Standard Load Balancer are deployed.
a. In Azure portal, Navigate to the Virtual Network: Click All Resources, Search for the Virtual
Network, Click on the Virtual Network, Select Subnets.
b. Click Add Subnet. Enter AzureFirewallSubnet as Name. Enter appropriate Address Range. Save.
3. Create Azure Firewall.
a. In Azure portal select All resources, click Add, Firewall, Create. Select Resource group (select the
same resource group, where the Virtual Network is).
b. Enter name for the Azure Firewall resource. For instance, MyAzureFirewall .
c. Select Region and select at least two Availability zones, aligned with the Availability zones where
your VMs are deployed.
d. Select your Virtual Network, where the SAP VMs and Azure Standard Load balancer are deployed.
e. Public IP Address: Click create and enter a name. For Instance MyFirewallPublicIP .
4. Create Azure Firewall Rule to allow outbound connectivity to specified public end points. The example
shows how to allow access to the Azure Management API public endpoint.
a. Select Rules, Network Rule Collection, then click Add network rule collection.
b. Name: MyOutboundRule , enter Priority, Select Action Allow .
c. Service: Name ToAzureAPI . Protocol: Select Any . Source Address: enter the range for your subnet,
where the VMs and Standard Load Balancer are deployed for instance: 11.97.0.0/24 . Destination
ports: enter * .
d. Save
e. As you are still positioned on the Azure Firewall, Select Overview. Note down the Private IP Address
of the Azure Firewall.
5. Create route to Azure Firewall
a. In Azure portal select All resources, then click Add, Route Table, Create.
b. Enter Name MyRouteTable, select Subscription, Resource group, and Location (matching the
location of your Virtual network and Firewall).
c. Save
The firewall rule would look like:
6. Create User Defined Route from the subnet of your VMs to the private IP of MyAzureFirewall .
a. As you are positioned on the Route Table, click Routes. Select Add.
b. Route name: ToMyAzureFirewall, Address prefix: 0.0.0.0/0 . Next hop type: Select Virtual Appliance.
Next hop address: enter the private IP address of the firewall you configured: 11.97.1.4 .
c. Save
sudo vi /etc/sysconfig/pacemaker
# Add the following lines
http_proxy=http://MyProxyService:MyProxyPort
https_proxy=http://MyProxyService:MyProxyPort
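To verify that outbound calls to the Azure management API work through the proxy, you can test from one of the cluster nodes; a sketch (any HTTP response, even an authentication error, shows that the endpoint is reachable through the proxy):
curl -x http://MyProxyService:MyProxyPort https://management.azure.com -v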
Red Hat
Other options
If outbound traffic is routed via third party, URL-based firewall proxy:
if using Azure fence agent make sure the firewall configuration allows outbound connectivity to the Azure
management API: https://management.azure.com and https://login.microsoftonline.com
if using SUSE's Azure public cloud update infrastructure for applying updates and patches, see Azure
Public Cloud Update Infrastructure 101
Next steps
Learn how to configure Pacemaker on SUSE in Azure
Learn how to configure Pacemaker on Red Hat in Azure
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance in Azure
12/22/2020 • 9 minutes to read
This article describes how to install and configure a high-availability SAP system in Azure by using a Windows
Server failover cluster and cluster shared disk for clustering an SAP ASCS/SCS instance. As described in
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared
disk, there are two alternatives for cluster shared disk:
Azure shared disks
Using SIOS DataKeeper Cluster Edition to create mirrored storage that simulates a clustered shared disk
Prerequisites
Before you begin the installation, review these documents:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster
shared disk
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
We don't describe the DBMS setup in this article because setups vary depending on the DBMS system you use.
We assume that high-availability concerns with the DBMS are addressed with the functionalities that different
DBMS vendors support for Azure. Examples are AlwaysOn or database mirroring for SQL Server and Oracle Data
Guard for Oracle databases. The high availability scenarios for the DBMS are not covered in this article.
There are no special considerations when different DBMS services interact with a clustered SAP ASCS or SCS
configuration in Azure.
NOTE
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical.
The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover
cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can
assume that the rest of the steps are the same.
IMPORTANT
The IP address that you assign to the virtual host name of the ASCS/SCS instance must be the same as the IP
address that you assigned to Azure Load Balancer.
Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address
2. If you are using the new SAP Enqueue Replication Server 2, which is also a clustered instance, you need to reserve a virtual host name for ERS2 in DNS as well.
IMPORTANT
The IP address that you assign to the virtual host name of the ERS2 instance must be the second IP address that you assigned to Azure Load Balancer.
Define the DNS entry for the SAP ERS2 cluster virtual name and TCP/IP address
3. To define the IP address that's assigned to the virtual host name, select DNS Manager > Domain .
New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
Install the SAP first cluster node
1. Execute the first cluster node option on cluster node A. Select:
ABAP system : ASCS instance number 00
Java system : SCS instance number 01
ABAP+Java system : ASCS instance number 00 and SCS instance number 01
IMPORTANT
Keep in mind that the configuration of the Azure internal load balancer load-balancing rules (if using Basic SKU) and the selected SAP instance numbers must match.
2. Follow the installation procedure described by SAP. In the start installation option "First Cluster Node", make sure to choose "Cluster Shared Disk" as the configuration option.
TIP
The SAP installation documentation describes how to install the first ASCS/SCS cluster node.
enque/encni/set_so_keepalive = true
For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.
Add a probe port
Use the internal load balancer's probe functionality to make the entire cluster configuration work with Azure Load
Balancer. The Azure internal load balancer usually distributes the incoming workload equally between
participating virtual machines.
However, this won't work in some cluster configurations, because only one instance is active. The other instance is passive and can't accept any of the workload. The probe functionality helps the Azure internal load balancer detect which instance is active and target only the active instance.
IMPORTANT
In this example configuration, the ProbePort is set to 620<Nr>. For an SAP ASCS instance with instance number 00, it is 62000. You will need to adjust the configuration to match your SAP instance numbers and your SAP SID.
To add a probe port, run this PowerShell module on one of the cluster VMs:
In the case of an SAP ASCS/SCS instance
If you are using ERS2, which is clustered, also configure the probe port for ERS2. There is no need to configure a probe port for ERS1, as it is not clustered.
function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new Azure Load Balancer health probe port on the 'SAP $SAPSID IP' cluster resource.
.DESCRIPTION
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new Azure Load Balancer health probe port on the 'SAP $SAPSID IP' cluster resource.
It will also restart the SAP cluster group (default behavior), to activate the changes.
You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.
Expectation is that the SAP group is installed with the official SWPM installation tool, which will set the default expected naming convention for:
- SAP Cluster Group: 'SAP $SAPSID'
- SAP Cluster IP Address Resource: 'SAP $SAPSID IP'
.PARAMETER SAPSID
SAP SID - 3 characters, starting with a letter.
.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.
.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so the SAP cluster group will be restarted to activate the changes.
.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is '$False'.
If set to $True, then handle the clustered new SAP ERS2 instance.
.EXAMPLE
# Set probe port to 62000 on SAP cluster resource 'SAP AB1 IP', and restart the SAP cluster group 'SAP AB1' to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000
.EXAMPLE
# Set probe port to 62000 on SAP cluster resource 'SAP AB1 IP'. SAP cluster group 'SAP AB1' IS NOT restarted, therefore changes are NOT active.
# To activate the changes you need to manually restart the 'SAP AB1' cluster group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000 -RestartSAPClusterGroup $False
.EXAMPLE
# Set probe port to 62001 on SAP cluster resource 'SAP AB1 ERS IP'. SAP cluster group 'SAP AB1 ERS' IS restarted to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62001 -IsSAPERSClusteredInstance $True
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,
[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,
[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)
BEGIN{}
PROCESS{
try{
if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS Instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS Instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}
if($RestartSAPClusterGroup){
Write-Output ""
Write-Output "Activating changes..."
2. Restart cluster node A within the Windows guest operating system. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.
3. Restart cluster node A from the Azure portal. This initiates an automatic failover of the SAP <SID> cluster
group from node A to node B.
4. Restart cluster node A by using Azure PowerShell. This initiates an automatic failover of the SAP <SID>
cluster group from node A to node B.
5. Verification
After failover, verify that the SAP <SID> cluster group is running on cluster node B.
In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster node B
After failover, verify shared disk is now mounted on cluster node B.
After failover, if using SIOS, verify that SIOS DataKeeper is replicating data from source volume drive
S on cluster node B to target volume drive S on cluster node A.
SIOS DataKeeper replicates the local volume from cluster node B to cluster node A
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure
12/22/2020 • 4 minutes to read
This article describes how to install and configure a high-availability SAP system on Azure, with Windows Server
Failover Cluster (WSFC) and Scale-Out File Server as an option for clustering SAP ASCS/SCS instances.
Prerequisites
Before you start the installation, review the following articles:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share
Prepare Azure infrastructure SAP high availability by using a Windows failover cluster and file share for
SAP ASCS/SCS instances
High availability for SAP NetWeaver on Azure VMs
You need the following executables and DLLs from SAP:
SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or later.
SAP Kernel 7.49 or later
IMPORTANT
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
We do not describe the Database Management System (DBMS) setup because setups vary depending on the
DBMS you use. However, we assume that high-availability concerns with the DBMS are addressed with the
functionalities that various DBMS vendors support for Azure. Such functionalities include AlwaysOn or database
mirroring for SQL Server, and Oracle Data Guard for Oracle databases. In the scenario we use in this article, we
didn't add more protection to the DBMS.
There are no special considerations when various DBMS services interact with this kind of clustered SAP
ASCS/SCS configuration in Azure.
NOTE
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical.
The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover
cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can
assume that all other parts are the same.
To create SAPMNT and set folder and share security, execute the following PowerShell script on one of the SOFS
cluster nodes:
# Create SAPMNT on file share
$SAPSID = "PR1"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"
$UsrSAPFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\"
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
Create a virtual host name for the clustered SAP ASCS/SCS instance
Create an SAP ASCS/SCS cluster network name (for example, pr1-ascs [10.0.6.7] ), as described in Create a
virtual host name for the clustered SAP ASCS/SCS instance.
PARAMETER NAME PARAMETER VALUE
gw/netstat_once 0
enque/encni/set_so_keepalive true
service/ha_check_node 1
Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks - Official SAP guidelines for high-
availability file share
Storage Spaces Direct in Windows Server 2016
Scale-Out File Server for application data overview
What's new in storage in Windows Server 2016
High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB) for SAP applications
12/22/2020 • 7 minutes to read
This article describes how to deploy, configure the virtual machines, install the cluster framework, and install a
highly available SAP NetWeaver 7.50 system on Windows VMs, using SMB on Azure NetApp Files.
The database layer isn't covered in detail in this article. We assume that the Azure virtual network has already
been created.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Note 2287140 lists prerequisites for SAP-supported CA feature of SMB 3.x protocol.
SAP Note 2802770 has troubleshooting information for the slow running SAP transaction AL11 on Windows
2012 and 2016.
SAP Note 1911507 has information about transparent failover feature for a file share on Windows Server with
the SMB 3.0 protocol.
SAP Note 662452 has a recommendation (deactivating 8.3 name generation) to address poor file system performance/errors during data access.
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS
instances on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
Add probe port in ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster
Create an SMB volume for Azure NetApp Files
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering an SAP ASCS/SCS instance on a Windows failover cluster. Instead of using cluster shared disks, one can use an SMB file share to deploy the SAP global host files. Azure NetApp Files supports SMBv3 (along with NFS) with NTFS ACLs using Active Directory. Azure NetApp Files is automatically highly available (as it is a PaaS service). These features make Azure NetApp Files a great option for hosting the SMB file share for SAP global.
Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services (AD DS) are supported.
You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can be in
Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. In this article, we will use Domain
controller in an Azure VM.
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, to achieve that on Windows, it was necessary to build either a SOFS cluster or use cluster shared disk software like SIOS. Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure NetApp Files for the shared storage eliminates the need for either SOFS or SIOS.
NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
IMPORTANT
You need to create Active Directory connections before creating an SMB volume. Review the requirements for Active
Directory connections.
TIP
You can find the instructions on how to mount the Azure NetApp Files volume, if you navigate in Azure Portal to the Azure
NetApp Files object, click on the Volumes blade, then Mount Instructions .
Prepare the infrastructure for SAP HA by using a Windows failover
cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance
5. If you are using Windows Server 2016, we recommend that you configure Azure Cloud Witness.
NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).
IMPORTANT
If Pre-requisite checker Results in SWPM shows Continuous availability feature condition not met, it can be
addressed by following the instructions in Delayed error message when you try to access a shared folder that no
longer exists in Windows.
TIP
If Pre-requisite checker Results in SWPM shows Swap Size condition not met, you can adjust the SWAP size by
navigating to My Computer>System Properties>Performance Settings> Advanced> Virtual memory> Change.
4. Configure an SAP cluster resource, the SAP-SID-IP probe port, by using PowerShell. Execute this
configuration on one of the SAP ASCS/SCS cluster nodes, as described in Configure probe port.
Install an ASCS/SCS instance on the second ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the second cluster node. Start the SAP SWPM installation tool, then
navigate to Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability System >
ASCS/SCS instance > Additional cluster node.
Install a DBMS instance and SAP application servers
Complete your SAP installation, by installing:
A DBMS instance
A primary SAP application server
An additional SAP application server
2. Restart cluster node A. The SAP cluster resources will move to cluster node B.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability and disaster recovery on
Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications
12/22/2020 • 34 minutes to read
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations and installation commands, ASCS instance number 00, ERS instance number 02, and SAP system ID NW1 are used. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the converged template with SAP system ID NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides The guides contain all required information to set up Netweaver HA
and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide
much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database
use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address.
We recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and
ERS load balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you
can use to deploy new virtual machines. The marketplace image contains the resource agent for SAP
NetWeaver.
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys
the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS Multi SID template or the converged template on the Azure portal. The ASCS/SCS
template only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only)
instances whereas the converged template also creates the load-balancing rules for a database (for
example Microsoft SQL Server or SAP HANA). If you plan to install an SAP NetWeaver based system and
you also want to install the database on the same machines, use the converged template.
2. Enter the following parameters
a. Resource Prefix (ASCS/SCS Multi SID template only)
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Sap System ID (converged template only)
Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the
resources that are deployed.
c. Stack Type
Select the SAP NetWeaver stack type
d. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
e. Db Type
Select HANA
f. Sap System Size.
The amount of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator
g. System Availability
Select HA
h. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
i. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should be assigned to, name the ID of that specific subnet. The ID usually looks like /subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP1; in this example the SLES4SAP 12 SP1 image https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-ARM (SLES For SAP Applications 12 SP1) is used
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP1; in this example the SLES4SAP 12 SP1 image https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-ARM (SLES For SAP Applications 12 SP1) is used
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-aers-frontend)
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual Machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-
ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600 , 3900 , 8100 , 50013 , 50014 , 50016 and TCP
for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302 , 50213 , 50214 , 50216 and TCP for the ASCS
ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause
the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
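A minimal sketch of applying this setting persistently, assuming sysctl settings are kept in /etc/sysctl.conf as elsewhere in this guide:
# Disable TCP timestamps and reload the kernel parameters
echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p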
NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of the package sap-suse-cluster-connector.
Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector if using
cluster nodes with a dash in the host name. Otherwise your cluster will not work.
Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .
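To check which connector package and version are actually installed, a quick rpm query (illustrative) is:
# Show the installed connector package version; 3.1.1 or later is required
rpm -q sap-suse-cluster-connector
# If the obsolete package is still installed, it shows up here
rpm -q sap_suse_cluster_connector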
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the
SUSE download page
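The grep check itself is not shown above; a sketch, assuming the SAPInstance resource agent is in the usual OCF path, would be:
# Check whether the SAPInstance resource agent supports the IS_ERS parameter
grep 'IS_ERS' /usr/lib/ocf/resource.d/heartbeat/SAPInstance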
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
IMPORTANT
Recent testing revealed situations where netcat stops responding to requests due to backlog and its limitation
of handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and
the floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following package
version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure
Load-Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
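To compare the installed resource-agents package against the version requirements above, an illustrative check is:
# Show the installed resource-agents version
rpm -q resource-agents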
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r
If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00 , try setting the owner and group
of the ASCS00 folder and retry.
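For example, assuming <sapsid>adm user nw1adm and group sapsys:
# Set ownership of the ASCS instance directory, then retry the installation
sudo chown nw1adm /usr/sap/NW1/ASCS00
sudo chgrp sapsys /usr/sap/NW1/ASCS00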
3. [1] Create a virtual IP resource and health-probe for the ERS instance
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.
If the installation fails to create a subfolder in /usr/sap/NW1/ERS02 , try setting the owner and group of
the ERS02 folder and retry.
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
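As a sketch, one of the keepalive parameters could be set at runtime as follows (the value shown is illustrative; take the actual values from the SAP note):
# Example keepalive tuning; consult SAP note 1410736 for the recommended values
sudo sysctl net.ipv4.tcp_keepalive_time=300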
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
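A sketch of this copy, assuming cluster node host names nw1-cl-0 and nw1-cl-1 and running on the node where the ASCS was installed:
# Copy the ASCS entry to the second node and append the ERS entry to the first node
cat /usr/sap/sapservices | grep ASCS00 | sudo ssh nw1-cl-1 "cat >>/usr/sap/sapservices"
sudo ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices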
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support.
If using enqueue server 2 architecture (ENSA2), define the resources as follows:
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r
sudo vi /etc/sysctl.conf
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db
# IP address of all application servers
10.0.0.20 nw1-di-0
10.0.0.21 nw1-di-1
4. Configure autofs
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
sudo vi /etc/waagent.conf
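The waagent.conf changes are not shown above; in these guides this step typically enables a swap file on the resource disk. A sketch (the swap size is illustrative):
# In /etc/waagent.conf, set the following (example values)
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000
Restart the agent afterwards, for example with sudo service waagent restart, so the change takes effect.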
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database for example nw1-db and
10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.
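For example (the sapinst path and the user name sapadmin are illustrative):
# Start SWPM and allow the non-root user sapadmin to connect
sudo ./sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin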
hdbuserstore List
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1
The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (HN1
in the output above)!
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>
# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector 3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1
# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application
server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from
application server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts
detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with
SPOOL service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with
active ABAP SPOOL service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with
BATCH service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with
active ABAP BATCH service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with
DIALOG service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances
with active ABAP DIALOG service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with
UPDATE service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances
with active ABAP UPDATE service on multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-ascs_NW1_00), SAPInstance
includes is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00), Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00), Enqueue replication active
# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
3. Test HAFailoverToNode
Resource state before starting the test:
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00
Run the following command as root on the node where the ASCS instance is running
If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7): call=219, status=complete,
exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms
Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.
# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands will stop the ASCS
instance and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be
lost in this test. If using enqueue server 2 architecture, the enqueue will be retained.
The enqueue lock of transaction su01 should be lost and the back-end should have been reset. Resource
state after the test:
Run the following commands as root to identify the process of the message server and kill it.
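The commands themselves are elided here; a sketch for system NW1 (the process name pattern is an assumption) is:
# Identify and kill the message server process (run as root on the ASCS node)
pgrep ms.sapNW1 | xargs kill -9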
If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.
nw1-cl-0:~ # pgrep en.sapNW1 | xargs kill -9
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail
over after the ASCS instance is started. Run the following commands as root to clean up the resource
state of the ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
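A sketch of the command, mirroring the enqueue server test above:
# Kill the enqueue replication server process (run as root on the ERS node)
pgrep er.sapNW1 | xargs kill -9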
If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as
root to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
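The command is elided here; a sketch, with the process match pattern as an assumption:
# Kill the sapstartsrv process of the ASCS instance
pgrep -f 'ASCS00.*sapstartsrv' | xargs kill -9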
The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state
after the test:
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure
VMs on SUSE Linux Enterprise Server with Azure
NetApp Files for SAP applications
12/22/2020 • 40 minutes to read • Edit Online
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the
example configurations and installation commands, the ASCS instance is number 00, the ERS instance is
number 01, the Primary Application Server instance (PAS) is number 02, and the Additional Application Server
instance (AAS) is number 03. SAP System ID QAS is used.
This article explains how to achieve high availability for SAP NetWeaver application with Azure NetApp Files.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI (https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required
SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides. The guides contain all required information to set up NetWeaver HA and
SAP HANA System Replication on-premises. Use these guides as a general baseline; they provide much
more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
SUSE Linux required building a separate highly available NFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files.
Using Azure NetApp Files for the shared storage eliminates the need for an additional NFS cluster. Pacemaker
is still needed for HA of the SAP NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS
load balancer.
(A)SCS
Frontend configuration
IP address 10.1.1.20
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.1.1.21
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration
on Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on
the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for
files on Azure NetApp volumes that are mounted on the VMs will be displayed as nobody .
2. [A] Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.1.0.4:/sapmnt/qas /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Azure Load Balancer manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load
balancer and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
10.1.1.21 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. Create a backend pool for the ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA por ts
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example
lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
10.1.1.21 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600 , 3900 , 8100 , 500 13, 500 14, 500 16
and TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201 , 3301 , 501 13, 501 14, 501 16 and
TCP for the ASCS ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details
see Azure Load balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional
configuration is performed to allow routing to public end points. For details on how to achieve
outbound connectivity see Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP
timestamps will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For
details see Load Balancer health probes.
NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of the package sap-suse-cluster-connector.
Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector if using
cluster nodes with a dash in the host name. Otherwise your cluster will not work.
Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the
SUSE download page
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
sudo vi /etc/auto.master
# Add the following line to the file, save and exit
/- /etc/auto.direct
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASsys
NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes.
If the Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If
the Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping
and make sure to use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files
volumes were created as NFSv3 volumes.
sudo vi /etc/waagent.conf
# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r
If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00 , try setting the owner and group
of the ASCS00 folder and retry.
3. [1] Create a virtual IP resource and health-probe for the ERS instance
# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.
If the installation fails to create a subfolder in /usr/sap/QAS/ERS01 , try setting the owner and group of
the ERS01 folder and retry.
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
cat /usr/sap/sapservices | grep ASCS00 | sudo ssh anftstsapcl2 "cat >>/usr/sap/sapservices"
sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support.
If using enqueue server 2 architecture (ENSA2), define the resources as follows:
sudo crm configure property maintenance-mode="true"
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers
# IP address of all application servers
10.1.1.15 anftstsapa01
10.1.1.16 anftstsapa02
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASpas
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASaas
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASaas
sudo vi /etc/waagent.conf
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.
KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS
in the output above)!
su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
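You can re-run the list command to confirm that the DEFAULT key now points to the virtual hostname:
# Verify the changed entry; ENV should now show qasdb:30313
hdbuserstore List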
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
3. Test HAFailoverToNode
Resource state before starting the test:
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
# Remove migration constraints
anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
#INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
Run the following command as root on the node where the ASCS instance is running
If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ anftstsapcl1 ]
OFFLINE: [ anftstsapcl2 ]
Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete,
exitreason='',
last-rc-change='Fri Mar 8 18:26:10 2019', queued=0ms, exec=0ms
Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.
# run as root
# list the SBD device(s)
anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-
36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"
Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands will stop the ASCS
instance and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be
lost in this test. If using enqueue server 2 architecture, the enqueue will be retained.
If using enqueue server 1 architecture, the enqueue lock of transaction su01 should be lost and the
back-end should have been reset. Resource state after the test:
Run the following commands as root to identify the process of the message server and kill it.
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail
over after the ASCS instance is started. Run the following commands as root to clean up the resource
state of the ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state
after the test:
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP
NetWeaver on Red Hat Enterprise Linux
12/22/2020 • 28 minutes to read • Edit Online
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations and
installation commands, ASCS instance number 00, ERS instance number 02, and SAP System ID NW1 are
used. The names of the resources (for example virtual machines, virtual networks) in the example assume that
you have used the ASCS/SCS template with Resource Prefix NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on
RHEL
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate
cluster and can be used by multiple SAP systems.
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS
load balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
Setting up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. See GlusterFS on Azure VMs
on Red Hat Enterprise Linux for SAP NetWeaver to learn how to set up GlusterFS for SAP NetWeaver.
Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on GitHub to deploy all required resources. The template
deploys the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS template on the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Stack Type
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Db Type
Select HANA
e. Sap System Count
The number of SAP systems that run in this cluster. Select 1.
f. System Availability
Select HA
g. Admin Username, Admin Password or SSH key
A new user is created that can be used to sign in to the machine.
h. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should
be assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600 , 3900 , 8100 , 50013 , 50014 , 50016 and TCP for
the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302 , 50213 , 50214 , 50216 and TCP for the ASCS
ERS
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo mount -a
sudo vi /etc/waagent.conf
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo pcs status
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00 , try setting the owner and group
of the ASCS00 folder and retry.
3. [1] Create a virtual IP resource and health-probe for the ERS instance
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo pcs status
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/NW1/ERS02 , try setting the owner and group of
the ERS02 folder and retry.
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
sudo vi /usr/sap/sapservices
# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm
# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or
newer and define the resources as follows:
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
1. [A] Add firewall rules for ASCS and ERS on both nodes
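The rules themselves are elided here; a sketch for ASCS instance 00 and ERS instance 02, using the ports listed in the load balancer section (adapt to your instance numbers):
# Open the ASCS and ERS ports permanently and reload the firewall (run on both nodes)
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62102,3302,50213,50214,50216}/tcp --permanent
sudo firewall-cmd --reload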
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/waagent.conf
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database for example nw1-db and
10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.
hdbuserstore List
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: NW1
The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (NW1 in
the output above)!
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>
# Remove failed actions for the ERS that occurred as part of the migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02
Run the following command as root on the node where the ASCS instance is running
The status after the node is started again should look like this.
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete,
exitreason='',
last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms
Run the following commands as root to identify the process of the message server and kill it.
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of
the ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@nw1-cl-1 ~]# pgrep er.sapNW1 | xargs kill -9
If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP
NetWeaver on Red Hat Enterprise Linux with Azure
NetApp Files for SAP applications
12/22/2020 • 32 minutes to read • Edit Online
This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example
configurations and installation commands, the ASCS instance is number 00, the ERS instance is number 01,
the Primary Application Server instance (PAS) is number 02, and the Additional Application Server instance
(AAS) is number 03. SAP System ID QAS is used.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on
RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
Red Hat Linux required building a separate highly available GlusterFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files.
Using Azure NetApp Files for the shared storage eliminates the need for an additional GlusterFS cluster.
Pacemaker is still needed for HA of the SAP NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the load balancer with
separate front-end IPs for (A)SCS and ERS.
(A)SCS
Frontend configuration
IP address 192.168.14.9
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 192.168.14.10
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
33<nr> TCP
5<nr> 13 TCP
5<nr> 14 TCP
5<nr> 16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details,
see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address)
Standard Azure load balancer, there is no outbound internet connectivity unless additional configuration is
performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP
high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP
timestamps causes the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For
details, see Load Balancer health probes.
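For example, you can disable TCP timestamps as follows (the sysctl.d file name is illustrative):
# Disable TCP timestamps at runtime and persist the setting across reboots
sudo sysctl -w net.ipv4.tcp_timestamps=0
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/99-azure-lb.conf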
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration
on Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the
NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions
for files on Azure NetApp Files volumes that are mounted on the VMs will be displayed as nobody .
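For example, you could verify and adjust the domain as follows (the sed pattern is a sketch, and the idmapd
service name can vary by release):
# Show the current NFS domain, set it to match Azure NetApp Files, and restart idmapd
grep -i "^Domain" /etc/idmapd.conf
sudo sed -i 's/^#*Domain = .*/Domain = defaultv4iddomain.com/' /etc/idmapd.conf
sudo systemctl restart nfs-idmapd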
2. [A] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 192.168.24.5:/sapQAS /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more details on how to change the nfs4_disable_idmapping parameter, see
https://access.redhat.com/solutions/1749883.
Create Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker
cluster for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Set up host name resolution
You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP addresses and the hostnames in the following commands.
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP addresses and hostnames to match your environment.
sudo vi /etc/fstab
If using NFSv4.1:
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/waagent.conf
# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of
the ASCS00 folder and retry.
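For example, mirroring the analogous ERS step later in this article:
sudo chown qasadm /usr/sap/QAS/ASCS00
sudo chgrp sapsys /usr/sap/QAS/ASCS00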
3. [1] Create a virtual IP resource and health-probe for the ERS instance
sudo pcs node unstandby anftstsapcl2
sudo pcs node standby anftstsapcl1
# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS
# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of
the ERS01 folder and retry.
sudo chown qasadm /usr/sap/QAS/ERS01
sudo chgrp sapsys /usr/sap/QAS/ERS01
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
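For example (the value shown is illustrative; use the settings from the SAP note):
# Shorten the TCP keepalive timer and persist the setting
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/98-sap-keepalive.conf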
ERS profile
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
sudo vi /usr/sap/sapservices
# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm
# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
8. [1] Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
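The full resource definitions appear in the official setup guide; the following is a minimal sketch of the
ENSA1 pattern, reusing the group names g-QAS_ASCS and g-QAS_AERS from above (timeouts and meta
attributes are illustrative):
sudo pcs property set maintenance-mode=true
# ASCS instance, added to the existing ASCS group
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
  InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  --group g-QAS_ASCS
# ERS instance; IS_ERS=true marks it as the enqueue replication server for ENSA1
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
  InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 \
  --group g-QAS_AERS
# Keep ERS away from ASCS when possible, and order ASCS start before ERS stop
sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
sudo pcs property set maintenance-mode=false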
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server
2 support. If using enqueue server 2 architecture (ENSA2), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
9. [A] Add firewall rules for ASCS and ERS on both nodes.
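A sketch for ASCS instance 00 and ERS instance 01 follows (derive the ports from your own instance
numbers; the probe ports 62000 and 62101 match the load balancer configuration above):
# ASCS00: probe port 62000, plus 32<nr>, 36<nr>, 39<nr>, 81<nr>, 5<nr>13, 5<nr>14, 5<nr>16
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp
# ERS01: probe port 62101, plus 33<nr>, 5<nr>13, 5<nr>14, 5<nr>16
sudo firewall-cmd --zone=public --add-port={62101,3301,50113,50114,50116}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62101,3301,50113,50114,50116}/tcp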
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
192.168.14.10 anftstsapers
192.168.14.7 anftstsapa01
192.168.14.8 anftstsapa02
sudo vi /etc/fstab
If using NFSv4.1:
sudo vi /etc/fstab
sudo mount -a
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3
# Mount
sudo mount -a
If using NFSv4.1:
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
# Mount
sudo mount -a
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3
# Mount
sudo mount -a
If using NFSv4.1:
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
# Mount
sudo mount -a
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.
KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in
the output above)!
su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
# Remove failed actions for the ERS that occurred as part of the migration
[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
Run the following command as root on the node where the ASCS instance is running
The status after the node is started again should look like this.
Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete, exitreason='',
Run the following commands as root to identify the process of the message server and kill it.
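For example (assuming the message server process name follows the ms.sap<SID> convention):
pgrep ms.sapQAS | xargs kill -9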
If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.
Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.
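For example (assuming the en.sap<SID> naming convention):
pgrep en.sapQAS | xargs kill -9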
The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of
the ASCS and ERS instance after the test.
Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@anftstsapcl2 ~]# pgrep er.sapQAS | xargs kill -9
If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.
Run the following commands as root on the node where the ASCS is running.
The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
SAP ASCS/SCS instance multi-SID high availability
with Windows Server Failover Clustering and Azure
shared disk
12/22/2020 • 15 minutes to read
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with Azure shared disk. When this process is completed, you have configured an SAP multi-SID
cluster.
IMPORTANT
When deploying SAP ASCS/SCS Windows Failover cluster with Azure shared disk, be aware that your deployment
operates with a single shared disk in one storage cluster. Your SAP ASCS/SCS instance will be affected if there are
issues with the storage cluster where the Azure shared disk is deployed.
IMPORTANT
The setup must meet the following conditions:
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.
Supported OS versions
Both Windows Server 2016 and Windows Server 2019 are supported (use the latest data center images).
We strongly recommend using Windows Server 2019 Datacenter , as:
Windows 2019 Failover Cluster Service is Azure aware
There is added integration and awareness of Azure host maintenance and an improved experience through
monitoring for Azure scheduled events.
It is possible to use the distributed network name (the default option). Therefore, there is no need to have a
dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on the
Azure Internal Load Balancer.
Architecture
Both Enqueue replication server 1 (ERS1) and Enqueue replication server 2 (ERS2) are supported in multi-SID
configuration. A mix of ERS1 and ERS2 is not supported in the same cluster.
1. The first example shows two SAP SIDs, both with ERS1 architecture where:
SAP SID1 is deployed on shared disk, with ERS1. The ERS instance is installed on local host and on
local drive. SAP SID1 has its own (virtual) IP address (SID1 (A)SCS IP1), which is configured on the
Azure Internal Load balancer.
SAP SID2 is deployed on shared disk, with ERS1. The ERS instance is installed on local host and on
local drive. SAP SID2 has its own (virtual) IP address (SID2 (A)SCS IP2), which is also configured on the
Azure Internal Load balancer.
2. The second example shows two SAP SIDs, both with ERS2 architecture where:
SAP SID1 with ERS2, which is also clustered, is deployed on the local drive.
SAP SID1 has its own (virtual) IP address (SID1 (A)SCS IP1), which is configured on the Azure Internal
Load balancer. SAP ERS2, used by the SAP SID1 system, has its own (virtual) IP address (SID1 ERS2 IP2),
which is configured on the Azure Internal Load balancer.
SAP SID2 with ERS2, which is also clustered, is deployed on the local drive.
SAP SID2 has its own (virtual) IP address (SID2 (A)SCS IP3), which is configured on the Azure Internal
Load balancer. SAP ERS2, used by the SAP SID2 system, has its own (virtual) IP address (SID2 ERS2 IP4),
which is configured on the Azure Internal Load balancer.
Here we have a total of four virtual IP addresses:
SID1 (A)SCS IP1
SID1 ERS2 IP2
SID2 (A)SCS IP3
SID2 ERS2 IP4
Infrastructure preparation
We'll install a new SAP SID, PR2, in addition to the existing clustered SAP PR1 ASCS/SCS instance.
Host names and IP addresses
HOST NAME ROLE | HOST NAME | STATIC IP ADDRESS | AVAILABILITY SET | PROXIMITY PLACEMENT GROUP
# Format SAP ASCS disk number '3' with drive letter 'S'
$SAPSID = "PR2"
$DiskNumber = 3
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"
# Completion sketch: initialize, partition, and format the raw disk
Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk
New-Partition -DiskNumber $DiskNumber -DriveLetter $DriveLetter -UseMaximumSize
Format-Volume -DriveLetter $DriveLetter -FileSystem NTFS -NewFileSystemLabel $DiskLabel -Confirm:$false
Create a virtual host name for the clustered SAP ASCS/SCS instance
1. Create a DNS entry for the virtual host name of the new SAP ASCS/SCS instance in the Windows DNS
manager.
The IP address you assign to the virtual host name in DNS must be the same as the IP address you assigned
in Azure Load Balancer.
Define the DNS entry for the SAP ASCS/SCS cluster virtual name and IP address
2. If using SAP Enqueue Replication Server 2, which is also a clustered instance, you need to reserve a
virtual host name for ERS2 in DNS as well. The IP address you assign to the virtual host name for ERS2 in
DNS must be the same as the IP address you assigned in Azure Load Balancer.
Define the DNS entry for the SAP ERS2 cluster virtual name and IP address
3. To define the IP address that's assigned to the virtual host name, select DNS Manager > Domain .
New virtual name and TCP/IP address for SAP ASCS/SCS and ERS2 cluster configuration
SAP Installation
Install the SAP first cluster node
Follow the SAP-described installation procedure. Make sure to select “First Cluster Node” as the start
installation option, and “Cluster Shared Disk” as the configuration option.
Choose the newly created shared disk.
Modify the SAP profile of the ASCS/SCS instance
If you are running Enqueue Replication Server 1, add SAP profile parameter enque/encni/set_so_keepalive as
described below. The profile parameter prevents connections between SAP work processes and the enqueue
server from closing when they are idle for too long. The SAP parameter is not required for ERS2.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if using ERS1.
enque/encni/set_so_keepalive = true
For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.
Configure probe port on the cluster resource
Use the internal load balancer's probe functionality to make the entire cluster configuration work with Azure Load
Balancer. The Azure internal load balancer usually distributes the incoming workload equally between participating
virtual machines.
However, this won't work in some cluster configurations because only one instance is active. The other instance is
passive and can't accept any of the workload. The probe functionality helps the Azure internal load balancer
detect which instance is active, and target only the active instance.
IMPORTANT
In this example configuration, the ProbePort is set to 620<nr>. For the SAP ASCS instance with instance number
02, it is 62002. You will need to adjust the configuration to match your SAP instance numbers and your SAP SID.
To add a probe port, run this PowerShell module on one of the cluster VMs:
For the SAP ASCS/SCS instance, with instance number 02.
For ERS2, with instance number 12, which is also clustered. There is no need to configure a probe port for
ERS1, as it is not clustered.
function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.
.DESCRIPTION
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.
It will also restart SAP Cluster group (default behavior), to activate the changes.
You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.
Expectation is that SAP group is installed with official SWPM installation tool, which will set default
expected naming convention for:
- SAP Cluster Group: 'SAP $SAPSID'
- SAP Cluster IP Address Resource: 'SAP $SAPSID IP'
.PARAMETER SAPSID
SAP SID - 3 characters starting with a letter.
.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.
.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so SAP cluster group will be restarted to activate the changes.
.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is '$False'.
If set to $True, then handle the clustered new SAP ERS2 instance.
.EXAMPLE
# Set probe port to 62000 on SAP cluster resource 'SAP AB1 IP', and restart the SAP cluster group 'SAP AB1' to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000
.EXAMPLE
# Set probe port to 62000 on SAP cluster resource 'SAP AB1 IP'. SAP cluster group 'SAP AB1' IS NOT restarted, therefore changes are NOT active.
# To activate the changes, you need to manually restart the 'SAP AB1' cluster group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000 -RestartSAPClusterGroup $False
.EXAMPLE
# Set probe port to 62001 on SAP cluster resource 'SAP AB1 ERS IP'. SAP cluster group 'SAP AB1 ERS' IS restarted, to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62001 -IsSAPERSClusteredInstance $True
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,
[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,
[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)

BEGIN{}
PROCESS{
try{
if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS Instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS Instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}
#$ActivateChanges = Read-Host "Do you want to restart SAP cluster role '$SAPClusterRoleName', to activate the changes (yes/no)?"
if($RestartSAPClusterGroup){
Write-Output ""
Write-Output "Activating changes..."
}
END {}
}
2. Restart cluster node A within the Windows guest operating system. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.
3. Restart cluster node A from the Azure portal. This initiates an automatic failover of the SAP <SID> cluster
group from node A to node B.
4. Restart cluster node A by using Azure PowerShell. This initiates an automatic failover of the SAP <SID>
cluster group from node A to node B.
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an SAP
ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
SAP ASCS/SCS instance multi-SID high availability
with Windows Server Failover Clustering and shared
disk on Azure
12/22/2020 • 8 minutes to read
If you have an SAP deployment, you must use an internal load balancer to create a Windows cluster configuration
for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with shared disk, using SIOS to simulate shared disk. When this process is completed, you have
configured an SAP multi-SID cluster.
NOTE
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-
end IPs for each Azure internal load balancer.
For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in
Networking limits: Azure Resource Manager.
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.
Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by using shared disk, as shown
in this diagram.
IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.
PARAMETER NAME | VALUE
You can install additional SAP ASCS/SCS instances in the existing WSFC cluster with two nodes:
Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using the following
parameters:
pr5-sap-cl 10.0.0.50
The new host name and IP address are displayed in DNS Manager, as shown in the following screenshot:
NOTE
The new IP address that you assign to the virtual host name of the additional ASCS/SCS instance must be the same as the
new IP address that you assigned to the SAP Azure load balancer.
In our scenario, the IP address is 10.0.0.50.
$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName ="lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"
Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -ForegroundColor Green
$ILB | Set-AzLoadBalancer
Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor
Green
After the script has run, the results are displayed in the Azure portal, as shown in the following screenshot:
Add disks to cluster machines, and configure the SIOS cluster-share disk
You must add a new cluster-share disk for each additional SAP ASCS/SCS instance. For Windows Server 2012 R2,
the WSFC cluster share disk currently in use is the SIOS DataKeeper software solution.
Do the following:
1. Add an additional disk or disks of the same size (which you need to stripe) to each of the cluster nodes, and
format them.
2. Configure storage replication with SIOS DataKeeper.
This procedure assumes that you have already installed SIOS DataKeeper on the WSFC cluster machines. If you
have installed it, you must now configure replication between the machines. The process is described in detail in
Install SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk.
Deploy VMs for SAP application servers and the DBMS cluster
To complete the infrastructure preparation for the second SAP system, do the following:
1. Deploy dedicated VMs for the SAP application servers, and put each in its own dedicated availability set.
2. Deploy dedicated VMs for the DBMS cluster, and put each in its own dedicated availability set.
Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
SAP ASCS/SCS instance multi-SID high availability
with Windows Server Failover Clustering and file
share on Azure
12/22/2020 • 6 minutes to read
You can manage multiple virtual IP addresses by using an Azure internal load balancer.
If you have an SAP deployment, you can use an internal load balancer to create a Windows cluster configuration
for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with file share . When this process is completed, you have configured an SAP multi-SID cluster.
NOTE
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-
end IPs for each Azure internal load balancer.
The configuration introduced in this documentation is not yet supported for use with Azure Availability Zones.
For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in
Networking limits: Azure Resource Manager. Also consider using the Azure Standard Load Balancer SKU instead of
the basic SKU of the Azure load balancer.
Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by using file share , as shown
in this diagram.
Figure 1: An SAP ASCS/SCS instance and SOFS deployed in two clusters
IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Different SAP Global Hosts file shares belonging to different SAP SIDs must share the same SOFS cluster.
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.
IMPORTANT
For the second SAP <SID2> system, the same Volume1 and the same <SAPGlobalHost> network name are used.
Because you have already set SAPMNT as the share name for various SAP systems, to reuse the <SAPGlobalHost>
network name, you must use the same Volume1.
The file path for the <SID2> global host is C:\ClusterStorage\Volume1\usr\sap\<SID2>\SYS.
For the <SID2> system, you must prepare the SAP Global Host ..\SYS.. folder on the SOFS cluster.
To prepare the SAP Global Host for the <SID2> instance, execute the following PowerShell script:
##################
# SAP multi-SID
##################
$SAPSID2 = "PR2"
$DomainName2 = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName2 = "$DomainName2\SAP_" + $SAPSID2 + "_GlobalAdmin"
$UsrSAPFolder = "C:\ClusterStorage\Volume1\usr\sap\"
# Set security (completion sketch: grant the SAP global admin group full control;
# the exact access rule in the original script may differ)
$Acl = Get-Acl $UsrSAPFolder
$Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($SAPSIDGlobalAdminGroupName2, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($Ar)
Set-Acl $UsrSAPFolder $Acl -Verbose
Prepare the infrastructure on the SOFS cluster by using a different SAP Global Host
You can configure the second SOFS (for example, a second SOFS cluster role with <SAPGlobalHost2> and a
different Volume2 for the second <SID2>).
Figure 4: Multi-SID SOFS with the second SAP Global host name
To create the second SOFS role with <SAPGlobalHost2>, execute this PowerShell script:
$UsrSAPFolder = "C:\ClusterStorage\Volume2\usr\sap\"
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
To create a SAPMNT file share on Volume2 with the <SAPGlobalHost2> host name for the second SAP <SID2>,
start the Add File Share wizard in Failover Cluster Manager.
Right-click the saoglobal2 SOFS cluster group, and then select Add File Share .
Figure 6: Start “Add File Share” wizard
Figure 11: Assign "Full control" to user group and computer accounts
Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks: Official SAP guidelines for an HA file
share
Storage spaces direct in Windows Server 2016
Scale-out file server for application data overview
What's new in storage in Windows Server 2016
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP
applications multi-SID guide
12/22/2020 • 27 minutes to read
This article describes how to deploy multiple SAP NetWeaver or S/4HANA highly available systems (that is,
multi-SID) in a two-node cluster on Azure VMs with SUSE Linux Enterprise Server for SAP applications.
In the example configurations and installation commands, three SAP NetWeaver 7.50 systems are deployed in a
single, two-node high availability cluster. The SAP systems SIDs are:
NW1 : ASCS instance number 00 and virtual host name msnw1ascs ; ERS instance number 02 and virtual
host name msnw1ers .
NW2 : ASCS instance number 10 and virtual hostname msnw2ascs ; ERS instance number 12 and virtual
host name msnw2ers .
NW3 : ASCS instance number 20 and virtual hostname msnw3ascs ; ERS instance number 22 and virtual
host name msnw3ers .
The article doesn't cover the database layer and the deployment of the SAP NFS shares. In the examples in this
article, we are using virtual names nw2-nfs for the NW2 NFS shares and nw3-nfs for the NW3 NFS shares,
assuming that the NFS cluster was deployed.
Before you begin, refer to the following SAP Notes and papers first:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides The guides contain all required information to set up Netweaver HA
and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide
much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
SUSE multi-SID cluster guide for SLES 12 and SLES 15
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
The virtual machines that participate in the cluster must be sized to be able to run all resources in case a failover
occurs. Each SAP SID can fail over independently of the others in the multi-SID high availability cluster. If
using SBD fencing, the SBD devices can be shared between multiple clusters.
To achieve high availability, SAP NetWeaver requires highly available NFS shares. In this example, we assume
the SAP NFS shares are either hosted on a highly available NFS file server, which can be used by multiple SAP
systems, or deployed on Azure NetApp Files NFS volumes.
IMPORTANT
The support for multi-SID clustering of SAP ASCS/ERS with SUSE Linux as guest operating system in Azure VMs is limited
to five SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication Server 1
and Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID clustering describes the installation
of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only
supported for ASCS/ERS.
TIP
The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is more complex to implement. It also
involves higher administrative effort, when executing maintenance activities (like OS patching). Before you start the
actual implementation, take time to carefully plan out the deployment and all involved components like VMs, NFS
mounts, VIPs, load balancer configurations and so on.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database
use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address.
We recommend using Standard load balancer.
The following list shows the configuration of the (A)SCS and ERS load balancer for this multi-SID cluster
example with three SAP systems. You will need separate frontend IP, health probes, and load-balancing rules for
each ASCS and ERS instance for each of the SIDs. Assign all VMs that are part of the ASCS/ERS cluster to one
backend pool.
(A)SCS
Frontend configuration
IP address for NW1: 10.3.1.14
IP address for NW2: 10.3.1.16
IP address for NW3: 10.3.1.13
Probe Ports
Port 620<nr> , therefore for NW1, NW2, and NW3 probe ports 62000 , 62010 and 62020
Load-balancing rules -
create one for each instance, that is, NW1/ASCS, NW2/ASCS and NW3/ASCS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address for NW1 10.3.1.15
IP address for NW2 10.3.1.17
IP address for NW3 10.3.1.19
Probe Port
Port 621<nr>, therefore for NW1, NW2, and NW3 probe ports 62102, 62112 and 62122
Load-balancing rules - create one for each instance, that is, NW1/ERS, NW2/ERS and NW3/ERS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause
the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
TIP
Always test the failover functionality of the cluster after the first system is deployed, before adding the additional SAP
SIDs to the cluster. That way you will know that the cluster functionality works, before adding the complexity of
additional SAP systems to the cluster.
Deploy additional SAP systems in the cluster
In this example, we assume that system NW1 was already deployed in the cluster. We will show how to deploy
SAP systems NW2 and NW3 in the cluster.
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
Prerequisites
IMPORTANT
Before following the instructions to deploy additional SAP systems in the cluster, follow the instructions to deploy the
first SAP system in the cluster, as there are steps which are only necessary during the first system deployment.
sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.16 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.13 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.17 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.19 msnw3ers
# IP address for virtual host name for the NFS server for NW2
10.3.1.31 nw2-nfs
# IP address for virtual host name for the NFS server for NW3
10.3.1.32 nw3-nfs
3. [A] Create the shared directories for the additional NW2 and NW3 SAP systems that you are deploying
to the cluster.
sudo mkdir -p /sapmnt/NW2
sudo mkdir -p /usr/sap/NW2/SYS
sudo mkdir -p /usr/sap/NW2/ASCS10
sudo mkdir -p /usr/sap/NW2/ERS12
sudo mkdir -p /sapmnt/NW3
sudo mkdir -p /usr/sap/NW3/SYS
sudo mkdir -p /usr/sap/NW3/ASCS20
sudo mkdir -p /usr/sap/NW3/ERS22
4. [A] Configure autofs to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional
SAP systems that you are deploying to the cluster. In this example NW2 and NW3 .
Update file /etc/auto.direct with the file systems for the additional SAP systems that you are
deploying to the cluster.
If using NFS file server, follow the instructions here
If using Azure NetApp Files, follow the instructions here
You will need to restart the autofs service to mount the newly added shares.
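The exact export paths depend on your NFS layout; a sketch of possible entries for NW2 follows (the export
paths are assumptions):
# Example /etc/auto.direct entries for NW2
/sapmnt/NW2       -nfsvers=4,nosymlink,sync  nw2-nfs:/sapmntNW2
/usr/sap/NW2/SYS  -nfsvers=4,nosymlink,sync  nw2-nfs:/usrsapNW2sys
# Restart autofs to pick up the new entries
sudo systemctl restart autofs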
Install ASCS / ERS
1. Create the virtual IP and health probe cluster resources for the ASCS instance of the additional SAP
system you are deploying to the cluster. The example shown here is for NW2 and NW3 ASCS, using
highly available NFS server.
IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following package
version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure
Load-Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
sudo crm configure primitive fs_NW2_ASCS Filesystem device='nw2-nfs:/NW2/ASCS'
directory='/usr/sap/NW2/ASCS10' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
As you create the resources, they may be assigned to different cluster nodes. When you group
them, they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all
resources are started. It is not important on which node the resources are running.
2. [1] Install SAP NetWeaver ASCS
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load
balancer frontend configuration for the ASCS. For example, for system NW2, the virtual hostname is
msnw2ascs, the IP address is 10.3.1.16, and the instance number used for the probe of the load balancer
is 10. For system NW3, the virtual hostname is msnw3ascs, the IP address is 10.3.1.13, and the instance
number used for the probe of the load balancer is 20.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.
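An illustrative sapinst invocation (the SWPM path and the remote-access user name are assumptions):
sudo /<swpm_path>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=msnw2ascs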
If the installation fails to create a subfolder in /usr/sap/<SID>/ASCS<Instance#>, try setting the owner to
<sid>adm and the group to sapsys of the ASCS<Instance#> folder and retry.
3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP
system you are deploying to the cluster. The example shown here is for NW2 and NW3 ERS, using
highly available NFS server.
sudo crm configure primitive fs_NW2_ERS Filesystem device='nw2-nfs:/NW2/ASCSERS'
directory='/usr/sap/NW2/ERS12' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
As you create the resources, they may be assigned to different cluster nodes. When you group them,
they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all resources are
started.
Next, make sure that the resources of the newly created ERS group, are running on the cluster node,
opposite to the cluster node where the ASCS instance for the same SAP system was installed. For
example, if NW2 ASCS was installed on slesmsscl1 , then make sure the NW2 ERS group is running on
slesmsscl2 . You can migrate the NW2 ERS group to slesmsscl2 by running the following command:
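A likely form of the command (the group name matches the cluster output shown later in this article):
crm resource migrate g-NW2_ERS slesmsscl2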
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.
If the installation fails to create a subfolder in /usr/sap/NW2/ERS<Instance#>, try setting the owner to
<sid>adm and the group to sapsys of the ERS<Instance#> folder and retry.
If it was necessary for you to migrate the ERS group of the newly deployed SAP system to a different
cluster node, don't forget to remove the location constraint for the ERS group. You can remove the
constraint by running the following command (the example is given for SAP systems NW2 and NW3 ).
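For example (group names as used elsewhere in this article):
crm resource clear g-NW2_ERS
crm resource clear g-NW3_ERS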
5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example
shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances
added to the cluster.
ASCS/SCS profile
sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile
sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
6. [A] Configure the SAP users for the newly deployed SAP system, in this example NW2 and NW3 .
7. Add the ASCS and ERS SAP services for the newly installed SAP system to the sapservice file. The
example shown below is for SAP systems NW2 and NW3 .
Add the ASCS service entry to the second node and copy the ERS service entry to the first node. Execute
the commands for each SAP system on the node, where the ASCS instance for the SAP system was
installed.
# Execute the following commands on slesmsscl1, assuming the NW2 ASCS instance was installed on slesmsscl1
cat /usr/sap/sapservices | grep ASCS10 | sudo ssh slesmsscl2 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl2 "cat /usr/sap/sapservices" | grep ERS12 | sudo tee -a /usr/sap/sapservices
# Execute the following commands on slesmsscl2, assuming the NW3 ASCS instance was installed on slesmsscl2
cat /usr/sap/sapservices | grep ASCS20 | sudo ssh slesmsscl1 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl1 "cat /usr/sap/sapservices" | grep ERS22 | sudo tee -a /usr/sap/sapservices
8. [1] Create the SAP cluster resources for the newly installed SAP system.
If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems NW2 and NW3
as follows:
sudo crm configure property maintenance-mode="true"
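The full resource definitions appear in the official setup guide; a minimal sketch of the ENSA1 pattern for
NW2 follows (repeat analogously for NW3; attribute values are illustrative):
# ASCS instance
sudo crm configure primitive rsc_sap_NW2_ASCS10 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ASCS10_msnw2ascs \
    START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
    AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
# ERS instance; IS_ERS=true marks the enqueue replication server for ENSA1
sudo crm configure primitive rsc_sap_NW2_ERS12 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ERS12_msnw2ers \
    START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
    AUTOMATIC_RECOVER=false IS_ERS=true
# Add the instances to the existing groups
sudo crm configure modgroup g-NW2_ASCS add rsc_sap_NW2_ASCS10
sudo crm configure modgroup g-NW2_ERS add rsc_sap_NW2_ERS12
# Keep ERS away from ASCS when possible, and order ASCS start before ERS stop
sudo crm configure colocation col_sap_NW2_no_both -5000: g-NW2_ERS g-NW2_ASCS
sudo crm configure order ord_sap_NW2_first_start_ascs Optional: rsc_sap_NW2_ASCS10:start rsc_sap_NW2_ERS12:stop symmetrical=false
sudo crm configure property maintenance-mode="false"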
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue
server 2 support. If using enqueue server 2 architecture (ENSA2), define the resources for SAP systems
NW2 and NW3 as follows:
sudo crm configure property maintenance-mode="true"
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running. The following example shows the cluster resources status, after SAP
systems NW2 and NW3 were added to the cluster.
sudo crm_mon -r
The following picture shows how the resources would look like in the HA Web Konsole(Hawk), with the
resources for SAP system NW2 expanded.
# 10.12.2019 21:33:08
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2
# 19.12.2019 21:19:58
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
# 10.12.2019 21:37:09
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl2
# HANodes: slesmsscl2, slesmsscl1
# 19.12.2019 21:17:39
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
# 10.12.2019 23:35:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2
# 19.12.2019 21:10:42
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
2. Manually migrate the ASCS instance. The example shows migrating the ASCS instance for SAP system
NW2.
Resource state, before starting the test:
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Run the following commands as root to migrate the NW2 ASCS instance.
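A likely migration command (force overrides the resource stickiness; the matching clear command appears
later in this article):
crm resource migrate rsc_sap_NW2_ASCS10 force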
# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12
3. Test HAFailoverToNode. The test presented here shows migrating the ASCS instance for SAP system
NW2.
Resource state before starting the test:
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Run the following commands as nw2adm to migrate the NW2 ASCS instance.
slesmsscl2:nw2adm 53> sapcontrol -nr 10 -host msnw2ascs -user nw2adm password -function HAFailoverToNode ""
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12
# Remove migration constraints
crm resource clear rsc_sap_NW2_ASCS10
#INFO: Removed migration constraints for rsc_sap_NW2_ASCS10
Run the following command as root on the node where at least one ASCS instance is running. In this
example, we executed the command on slesmsscl2 , where the ASCS instances for NW1 and NW3 are
running.
If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ slesmsscl1 ]
OFFLINE: [ slesmsscl2 ]
Full list of resources:
Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.
# run as root
# list the SBD device(s)
cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# output is like:
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on Red Hat Enterprise Linux for SAP applications
multi-SID guide
12/22/2020 • 24 minutes to read
This article describes how to deploy multiple SAP NetWeaver highly available systems (that is, multi-SID) in a
two-node cluster on Azure VMs with Red Hat Enterprise Linux for SAP applications.
In the example configurations and installation commands, three SAP NetWeaver 7.50 systems are deployed in a
single, two-node high availability cluster. The SAP systems SIDs are:
NW1 : ASCS instance number 00 and virtual host name msnw1ascs ; ERS instance number 02 and virtual
host name msnw1ers .
NW2 : ASCS instance number 10 and virtual hostname msnw2ascs ; ERS instance number 12 and virtual
host name msnw2ers .
NW3 : ASCS instance number 20 and virtual hostname msnw3ascs ; ERS instance number 22 and virtual
host name msnw3ers .
The article doesn't cover the database layer and the deployment of the SAP NFS shares. In the examples in this
article, we are using the Azure NetApp Files volume sapMSID for the NFS shares, assuming that the volume is
already deployed. We are also assuming that the Azure NetApp Files volume is deployed with the NFSv3 protocol
and that the following file paths exist for the cluster resources for the ASCS and ERS instances of SAP systems
NW1, NW2 and NW3:
volume sapMSID (nfs://10.42.0.4/sapmntNW1)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW1sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW2)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW2sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW3)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW3sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ers)
Before you begin, refer to the following SAP Notes and papers first:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
Azure NetApp Files documentation
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Overview
The virtual machines that participate in the cluster must be sized to be able to run all resources in case a failover occurs. Each SAP SID can fail over independently of the others in the multi-SID high availability cluster.
To achieve high availability, SAP NetWeaver requires highly available shares. In this documentation, we present examples with the SAP shares deployed on Azure NetApp Files NFS volumes. It is also possible to host the shares on a highly available GlusterFS cluster, which can be used by multiple SAP systems.
IMPORTANT
The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest operating system in Azure VMs is limited to five SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID clustering describes the installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only supported for ASCS/ERS.
TIP
Multi-SID clustering of SAP ASCS/ERS is a solution of higher complexity: it is more complex to implement and involves higher administrative effort when executing maintenance activities (like OS patching). Before you start the actual implementation, take time to carefully plan out the deployment and all involved components, like VMs, NFS mounts, VIPs, load balancer configurations, and so on.
SAP NetWeaver ASCS, SAP NetWeaver SCS, and SAP NetWeaver ERS use virtual host names and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using the Standard load balancer.
The following list shows the configuration of the (A)SCS and ERS load balancer for this multi-SID cluster example with three SAP systems. You will need separate frontend IPs, health probes, and load-balancing rules for each ASCS and ERS instance for each of the SIDs. Assign all VMs that are part of the ASCS/ERS cluster to one backend pool of a single ILB.
(A)SCS
Frontend configuration
IP address for NW1: 10.3.1.50
IP address for NW2: 10.3.1.52
IP address for NW3: 10.3.1.54
Probe Ports
Port 620<nr>, therefore for NW1, NW2, and NW3 probe ports 62000, 62010, and 62020
Load-balancing rules - create one for each instance, that is, NW1/ASCS, NW2/ASCS, and NW3/ASCS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address for NW1 10.3.1.51
IP address for NW2 10.3.1.53
IP address for NW3 10.3.1.55
Probe Port
Port 621<nr>, therefore for NW1, NW2, and NW3 probe ports 62102, 62112, and 62122
Load-balancing rules - create one for each instance, that is, NW1/ERS, NW2/ERS, and NW3/ERS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
SAP shares
SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For a highly available SAP system, it's important to have highly available shares. You will need to decide on the architecture for your SAP shares. One option is to deploy the shares on Azure NetApp Files NFS volumes. With Azure NetApp Files, you get built-in high availability for the SAP NFS shares.
Another option is to build GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver, which can be
shared between multiple SAP systems.
TIP
Always test the failover functionality of the cluster after the first system is deployed, before adding the additional SAP SIDs to the cluster. That way you will know that the cluster functionality works before adding the complexity of additional SAP systems.
IMPORTANT
Before following the instructions to deploy additional SAP systems in the cluster, follow the instructions to deploy the first
SAP system in the cluster, as there are steps which are only necessary during the first system deployment.
sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.52 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.54 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.53 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.55 msnw3ers
3. [A] Create the shared directories for the additional NW2 and NW3 SAP systems that you are deploying
to the cluster.
4. [A] Add the mount entries for the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP systems that you are deploying to the cluster. In this example, NW2 and NW3.
Update file /etc/fstab with the file systems for the additional SAP systems that you are deploying to the
cluster.
If using Azure NetApp Files, follow the instructions here
If using GlusterFS cluster, follow the instructions here
Install ASCS / ERS
1. Create the virtual IP and health probe cluster resources for the ASCS instances of the additional SAP
systems you are deploying to the cluster. The example shown here is for NW2 and NW3 ASCS, using NFS
on Azure NetApp Files volumes with NFSv3 protocol.
sudo pcs resource create fs_NW2_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ascs' \
directory='/usr/sap/NW2/ASCS10' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW2_ASCS
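The virtual IP and health-probe resources for the same group aren't shown in this excerpt. A sketch for NW2, assuming the azure-lb resource agent and using the frontend IP and probe port from this example configuration:
# Virtual IP for the NW2 ASCS load balancer frontend (10.3.1.52)
sudo pcs resource create vip_NW2_ASCS IPaddr2 ip=10.3.1.52 --group g-NW2_ASCS
# Listener for the Azure load balancer health probe on the ASCS probe port 62010
sudo pcs resource create nc_NW2_ASCS azure-lb port=62010 --group g-NW2_ASCS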
Make sure the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
2. [1] Install SAP NetWeaver ASCS
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS. For example, for system NW2, the virtual hostname is msnw2ascs, the IP address is 10.3.1.52, and the instance number used for the load balancer probe is 10. For system NW3, the virtual hostname is msnw3ascs, the IP address is 10.3.1.54, and the instance number used for the load balancer probe is 20. Note down on which cluster node you installed ASCS for each SAP SID.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME to install SAP using the virtual host name.
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
If the installation fails to create a subfolder in /usr/sap/SID/ASCSInstance#, try setting the owner to sidadm and the group to sapsys of the ASCSInstance# folder, and retry.
3. [1] Create the virtual IP and health-probe cluster resources for the ERS instance of the additional SAP system you are deploying to the cluster. The example shown here is for NW2 and NW3 ERS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
sudo pcs resource create fs_NW2_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ers' \
directory='/usr/sap/NW2/ERS12' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW2_AERS
Make sure the cluster status is ok and that all resources are started.
Next, make sure that the resources of the newly created ERS group are running on the cluster node opposite to the cluster node where the ASCS instance for the same SAP system was installed. For example, if NW2 ASCS was installed on rhelmsscl1, make sure the NW2 ERS group is running on rhelmsscl2. You can migrate the NW2 ERS group to rhelmsscl2 by running a move command for one of the cluster resources in the group:
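The move command isn't shown in this excerpt; a minimal sketch, assuming the file system resource of the NW2 ERS group defined earlier:
sudo pcs resource move fs_NW2_AERS rhelmsscl2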
4. [1] Install SAP NetWeaver ERS
Install SAP NetWeaver ERS as root on the other cluster node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS.
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.
If the installation fails to create a subfolder in /usr/sap/NW2/ERSInstance#, try setting the owner to sidadm and the group to sapsys of the ERSInstance# folder, and retry.
If it was necessary to migrate the ERS group of the newly deployed SAP system to a different cluster node, don't forget to remove the location constraint for the ERS group. You can remove the constraint by running the following command (the example is given for SAP systems NW2 and NW3). Make sure to remove the temporary constraints for the same resource you used in the command to move the ERS cluster group.
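The clear commands aren't shown in this excerpt; assuming the move was done with the file system resources of the ERS groups, they would look like this:
sudo pcs resource clear fs_NW2_AERS
sudo pcs resource clear fs_NW3_AERS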
5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example
shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances
added to the cluster.
ASCS/SCS profile
sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in
SAP note 1410736.
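The ASCS/SCS profile edits themselves aren't shown in this excerpt; for ENSA1, the typical changes, modeled on the companion single-SID RHEL guide, are:
# Change the restart command to a start command, so that instance restarts are handled by Pacemaker
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)
# If using ENSA1, add the keep alive parameter
enque/encni/set_so_keepalive = true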
ERS profile
sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
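The ERS profile edits are likewise not shown; typical ENSA1 changes, under the same assumption:
# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
# If using ENSA1, remove Autostart from the ERS profile
# Autostart = 1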
6. [A] Update the /usr/sap/sapservices file to prevent the instances managed by Pacemaker from being started by the sapinit startup script. The example below is for NW2 and NW3.
# On the node where ASCS was installed, comment out the lines for the ASCS instances
#LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm
#LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm
# On the node where ERS was installed, comment out the lines for the ERS instances
#LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm
#LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm
7. [1] Create the SAP cluster resources for the newly installed SAP system.
If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems NW2 and NW3 as
follows:
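The resource definitions aren't included in this excerpt. A sketch for NW2 with ENSA1, modeled on the single-SID RHEL guide (timeouts, scores, and meta attributes are examples and may need adapting to your setup):
sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
    InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
    --group g-NW2_ASCS

sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
    InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
    AUTOMATIC_RECOVER=false IS_ERS=true \
    --group g-NW2_AERS

# Keep ASCS and ERS of the same SID on different nodes and order their start/stop
sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false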
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server
2 support. If using enqueue server 2 architecture (ENSA2), define the resources for SAP systems NW2
and NW3 as follows:
sudo pcs property set maintenance-mode=true
If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running. The following example shows the cluster resources status, after SAP systems
NW2 and NW3 were added to the cluster.
sudo pcs status
8. [A] Add firewall rules for ASCS and ERS on both nodes. The example below shows the firewall rules for
both SAP systems NW2 and NW3 .
# NW2 - ASCS
sudo firewall-cmd --zone=public --add-port=62010/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62010/tcp
sudo firewall-cmd --zone=public --add-port=3210/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3210/tcp
sudo firewall-cmd --zone=public --add-port=3610/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3610/tcp
sudo firewall-cmd --zone=public --add-port=3910/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3910/tcp
sudo firewall-cmd --zone=public --add-port=8110/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8110/tcp
sudo firewall-cmd --zone=public --add-port=51013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51013/tcp
sudo firewall-cmd --zone=public --add-port=51014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51014/tcp
sudo firewall-cmd --zone=public --add-port=51016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51016/tcp
# NW2 - ERS
sudo firewall-cmd --zone=public --add-port=62112/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62112/tcp
sudo firewall-cmd --zone=public --add-port=3312/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3312/tcp
sudo firewall-cmd --zone=public --add-port=51213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51213/tcp
sudo firewall-cmd --zone=public --add-port=51214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51214/tcp
sudo firewall-cmd --zone=public --add-port=51216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51216/tcp
# NW3 - ASCS
sudo firewall-cmd --zone=public --add-port=62020/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62020/tcp
sudo firewall-cmd --zone=public --add-port=3220/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3220/tcp
sudo firewall-cmd --zone=public --add-port=3620/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3620/tcp
sudo firewall-cmd --zone=public --add-port=3920/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3920/tcp
sudo firewall-cmd --zone=public --add-port=8120/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8120/tcp
sudo firewall-cmd --zone=public --add-port=52013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52013/tcp
sudo firewall-cmd --zone=public --add-port=52014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52014/tcp
sudo firewall-cmd --zone=public --add-port=52016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52016/tcp
# NW3 - ERS
sudo firewall-cmd --zone=public --add-port=62122/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62122/tcp
sudo firewall-cmd --zone=public --add-port=3322/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3322/tcp
sudo firewall-cmd --zone=public --add-port=52213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52213/tcp
sudo firewall-cmd --zone=public --add-port=52214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52214/tcp
sudo firewall-cmd --zone=public --add-port=52216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52216/tcp
Run the following commands as root to migrate the NW3 ASCS instance.
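The move and constraint-removal commands aren't shown in this excerpt; based on the resource names used in this example they would be:
# Migrate the NW3 ASCS instance (creates a temporary location constraint)
pcs resource move rsc_sap_NW3_ASCS20
# Remove the temporary migration constraint after the move completes
pcs resource clear rsc_sap_NW3_ASCS20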
# Remove failed actions for the ERS that occurred as part of the migration
pcs resource cleanup rsc_sap_NW3_ERS22
Resource state after the test:
Run the following command as root on a node where at least one ASCS instance is running. In this example, the command was executed on rhelmsscl1, where the ASCS instances for NW1, NW2, and NW3 are running.
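The command itself isn't shown in this excerpt; the node-crash test typically uses the kernel sysrq trigger (an assumption based on the companion guides):
# Crash the node immediately; requires sysrq to be enabled
echo c > /proc/sysrq-trigger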
The status after the test, once the crashed node has started again, should look like this.
Full list of resources:
If there are messages for failed resources, clean the status of the failed resources. For example:
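The example isn't shown in this excerpt; assuming the naming convention used for the other resources in this article, it would look like:
sudo pcs resource cleanup rsc_sap_NW1_ERS02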
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
About disaster recovery for on-premises apps
3/19/2020 • 8 minutes to read
This article describes on-premises workloads and apps you can protect for disaster recovery with the Azure Site
Recovery service.
Overview
Organizations need a business continuity and disaster recovery (BCDR) strategy to keep workloads and data safe and available during planned and unplanned downtime, and to recover to regular working conditions.
Site Recovery is an Azure service that contributes to your BCDR strategy. Using Site Recovery, you can deploy
application-aware replication to the cloud, or to a secondary site. You can use Site Recovery to manage replication,
perform disaster recovery testing, and run failovers and failback. Your apps can run on Windows or Linux-based
computers, physical servers, VMware, or Hyper-V.
Site Recovery integrates with Microsoft applications such as SharePoint, Exchange, Dynamics, SQL Server, and
Active Directory. Microsoft works closely with leading vendors including Oracle, SAP, and Red Hat. You can
customize replication solutions on an app-by-app basis.
Workload summary
Site Recovery can replicate any app running on a supported machine. We've partnered with product teams to do
additional testing for the apps specified in the following table.
| Workload | Replicate Azure VMs to Azure | Replicate Hyper-V VMs to a secondary site | Replicate Hyper-V VMs to Azure | Replicate VMware VMs to a secondary site | Replicate VMware VMs to Azure |
| --- | --- | --- | --- | --- | --- |
| SAP (Replicate SAP site to Azure for non-cluster) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) |
| Linux (operating system and apps) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) |
Protect SharePoint
Azure Site Recovery helps protect SharePoint deployments, as follows:
Eliminates the need and associated infrastructure costs for a stand-by farm for disaster recovery. Use Site
Recovery to replicate an entire farm (web, app, and database tiers) to Azure or to a secondary site.
Simplifies application deployment and management. Updates deployed to the primary site are automatically
replicated. The updates are available after failover and recovery of a farm in a secondary site. Lowers the
management complexity and costs associated with keeping a stand-by farm up to date.
Simplifies SharePoint application development and testing by creating a production-like copy on-demand
replica environment for testing and debugging.
Simplifies transition to the cloud by using Site Recovery to migrate SharePoint deployments to Azure.
Learn more about disaster recovery for SharePoint.
Protect Dynamics AX
Azure Site Recovery helps protect your Dynamics AX ERP solution, by:
Managing replication of your entire Dynamics AX environment (Web and AOS tiers, database tiers, SharePoint)
to Azure, or to a secondary site.
Simplifying migration of Dynamics AX deployments to the cloud (Azure).
Simplifying Dynamics AX application development and testing by creating a production-like copy on-demand,
for testing and debugging.
Learn more about disaster recovery for Dynamics AX.
Protect Exchange
Site Recovery helps protect Exchange, as follows:
For small Exchange deployments, such as a single or standalone server, Site Recovery can replicate and fail over
to Azure or to a secondary site.
For larger deployments, Site Recovery integrates with Exchange DAGs.
Exchange DAGs are the recommended solution for Exchange disaster recovery in an enterprise. Site Recovery
recovery plans can include DAGs, to orchestrate DAG failover across sites.
To learn more about disaster recovery for Exchange, see Exchange DAGs and Exchange disaster recovery.
Protect SAP
Use Site Recovery to protect your SAP deployment, as follows:
Enable protection of SAP NetWeaver and non-NetWeaver production applications running on-premises, by replicating components to Azure.
Enable protection of SAP NetWeaver and non-NetWeaver production applications running in Azure, by replicating components to another Azure datacenter.
Simplify cloud migration, by using Site Recovery to migrate your SAP deployment to Azure.
Simplify SAP project upgrades, testing, and prototyping, by creating a production clone on-demand for testing
SAP applications.
Learn more about disaster recovery for SAP.
Next steps
Learn more about disaster recovery for an Azure VM.
Azure proximity placement groups for optimal
network latency with SAP applications
12/22/2020 • 11 minutes to read
SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are sensitive to network latency
between the SAP application tier and the SAP database tier. This sensitivity is the result of most of the business
logic running in the application layer. Because the SAP application layer runs the business logic, it issues queries
to the database tier at a high frequency, at a rate of thousands or tens of thousands per second. In most cases,
the nature of these queries is simple. They can often be run on the database tier in 500 microseconds or less.
The time spent on the network to send such a query from the application tier to the database tier and receive
the result set back has a major impact on the time it takes to run business processes. This sensitivity to network
latency is why you might want to achieve a certain maximum network latency in SAP deployment projects. See
SAP Note #1100926 - FAQ: Network performance for guidelines on how to classify the network latency.
In many Azure regions, the number of datacenters has grown. At the same time, customers, especially for high-
end SAP systems, are using more special VM SKUs of the M or Mv2 family, or HANA Large Instances. These
Azure virtual machine types aren't always available in all the datacenters that complement an Azure region.
These facts can create opportunities to optimize network latency between the SAP application layer and the
SAP DBMS layer.
To give you the possibility to optimize network latency, Azure offers proximity placement groups. Proximity placement groups can be used to force the grouping of different VM types into a single Azure datacenter to optimize the network latency between these different VM types as much as possible. In the process of deploying the first VM into such a proximity placement group, the VM gets bound to a specific datacenter. As appealing as this prospect sounds, the usage of the construct introduces some restrictions as well:
You cannot assume that all Azure VM types are available in every Azure datacenter. As a result, the combination of different VM types within one proximity placement group can be restricted. These restrictions occur because the host hardware that's needed to run a certain VM type might not be present in the datacenter to which the placement group was deployed.
As you resize parts of the VMs that are within one proximity placement group, you cannot automatically
assume that in all cases the new VM type is available in the same datacenter as the other VMs that are part
of the proximity placement group
As Azure decommissions hardware, it might force certain VMs of a proximity placement group into another Azure datacenter. For details covering this case, read the document Co-locate resources for improved latency.
IMPORTANT
As a result of the potential restrictions, proximity placement groups should be used:
Only when necessary
Only at the granularity of a single SAP system, and not for a whole system landscape or a complete SAP landscape
In a way to keep the different VM types and the number of VMs within a proximity placement group to a minimum
NOTE
If there is no host hardware deployed that could run a specific VM type in the datacenter where the first VM was placed,
the deployment of the requested VM type won’t succeed. You’ll get a failure message.
A single Azure resource group can have multiple proximity placement groups assigned to it. But a proximity
placement group can be assigned to only one Azure resource group.
Proximity placement groups with SAP systems that use only Azure
VMs
Most SAP NetWeaver and S/4HANA system deployments on Azure don't use HANA Large Instances. For
deployments that don't use HANA Large Instances, it's important to provide optimal performance between the
SAP application layer and the DBMS tier. To do so, define an Azure proximity placement group just for the
system.
In most customer deployments, customers build a single Azure resource group for SAP systems. In that case,
there's a one-to-one relationship between, for example, the production ERP system resource group and its
proximity placement group. In other cases, customers organize their resource groups horizontally and collect
all production systems in a single resource group. In this case, you'd have a one-to-many relationship between
your resource group for production SAP systems and several proximity placement groups for your production
SAP ERP, SAP BW, and so on.
Avoid bundling several SAP production or non-production systems in a single proximity placement group.
When a small number of SAP systems or an SAP system and some surrounding applications need to have low
latency network communication, you might consider moving these systems into one proximity placement
group. Avoid bundles of systems because the more systems you group in a proximity placement group, the
higher the chances:
That you require a VM type that can't be run in the specific datacenter to which the proximity placement group was scoped.
That resources of non-mainstream VMs, like M-Series VMs, could eventually be unfulfilled when you need
more because you're adding software to a proximity placement group over time.
Here's what the ideal configuration, as described, looks like:
In this case, single SAP systems are grouped in one resource group each, with one proximity placement group
each. There's no dependency on whether you use HANA scale-out or DBMS scale-up configurations.
Get-AzureRmContext
If you need to change to a different subscription, switch the context to that subscription first. Then deploy your first VM into the proximity placement group by using a command like this one:
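The commands are omitted in this excerpt. A minimal sketch using the Az PowerShell module (the subscription name, region, VM name, and VM size are assumptions; the resource group and placement group names match the Get-AzProximityPlacementGroup example later in this article):
# Switch to the subscription you want to deploy into (name is an example)
Set-AzContext -Subscription "my-sap-subscription"

# Create the proximity placement group, then deploy the first (anchor) VM into it.
# The resource group "myfirstppgexercise" is assumed to exist already.
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "myfirstppgexercise" `
    -Name "letsgetclose" -Location "westus2"
New-AzVM -ResourceGroupName "myfirstppgexercise" -Name "anchorvm" `
    -Image "Win2016Datacenter" -Size "Standard_M64s" -ProximityPlacementGroupId $ppg.Id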
The preceding command deploys a Windows-based VM. After this VM deployment succeeds, the datacenter
scope of the proximity placement group is defined within the Azure region. All subsequent VM deployments
that reference the proximity placement group, as shown in the preceding command, will be deployed in the
same Azure datacenter, as long as the VM type can be hosted on hardware placed in that datacenter, and
capacity for that VM type is available.
A successful deployment of this virtual machine would host the database instance of the SAP system in one
Availability Zone. The scope of the proximity placement group is fixed to one of the datacenters that represent
the Availability Zone you defined.
Assume you deploy the Central Services VMs in the same way as the DBMS VMs, referencing the same zone or
zones and the same proximity placement groups. In the next step, you need to create the availability sets you
want to use for the application layer of your SAP system.
Define and create the proximity placement group. The command for creating the availability set requires an
additional reference to the proximity placement group ID (not the name). You can get the ID of the proximity
placement group by using this command:
Get-AzProximityPlacementGroup -ResourceGroupName "myfirstppgexercise" -Name "letsgetclose"
When you create the availability set, you need to consider additional parameters when you're using managed
disks (default unless specified otherwise) and proximity placement groups:
Ideally, you should use three fault domains. But the number of supported fault domains can vary from region to region. In this case, the maximum number of fault domains possible for the specific region is two. To deploy your application layer VMs, you need to add a reference to your availability set name and the proximity placement group name, as shown here:
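The availability set and application-layer VM commands are omitted in this excerpt. A sketch, assuming the resource group and placement group names used earlier, hypothetical names for the availability set and VM, and an example region and VM size:
# Get the placement group ID (the availability set requires the ID, not the name)
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "myfirstppgexercise" -Name "letsgetclose"

# Create the availability set; this example region supports two fault domains
New-AzAvailabilitySet -ResourceGroupName "myfirstppgexercise" -Name "appavset" `
    -Location "westus2" -Sku "Aligned" -PlatformFaultDomainCount 2 `
    -PlatformUpdateDomainCount 5 -ProximityPlacementGroupId $ppg.Id

# Deploy an application layer VM referencing both the availability set and the placement group
New-AzVM -ResourceGroupName "myfirstppgexercise" -Name "appvm1" `
    -Image "Win2016Datacenter" -Size "Standard_E16s_v3" `
    -AvailabilitySetName "appavset" -ProximityPlacementGroupId $ppg.Id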
NOTE
Because you deploy one DBMS VM into one zone and the second DBMS VM into another zone to create a high
availability configuration, you'll need a different proximity placement group for each of the zones. The same is true for
any availability set that you use.
Next steps
Check out the documentation:
SAP workloads on Azure: planning and deployment checklist
Preview: Deploy VMs to proximity placement groups using Azure CLI
Preview: Deploy VMs to proximity placement groups using PowerShell
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
SAP BusinessObjects BI platform planning and
implementation guide on Azure
12/22/2020 • 17 minutes to read
Overview
The purpose of this guide is to provide guidelines for planning, deploying, and configuring SAP BusinessObjects BI Platform, also known as SAP BOBI Platform, on Azure. This guide covers the common Azure services and features that are relevant for SAP BOBI Platform. It isn't an exhaustive list of all possible configuration options; it covers solutions common to typical deployment scenarios.
This guide isn't intended to replace the standard SAP BOBI Platform installation and administration guides,
operating system, or any database documentation.
Architecture details
Load balancer
In an SAP BOBI multi-instance deployment, web application servers (the web tier) run on two or more hosts. To distribute user load evenly across the web servers, you can use a load balancer between end users and web servers. In Azure, you can use either Azure Load Balancer or Azure Application Gateway to manage traffic to your web servers.
Web application servers
The web server hosts the web applications of SAP BOBI Platform, like CMC and BI Launch Pad. To achieve high availability for the web server, you must deploy at least two web application servers to manage redundancy and load balancing. In Azure, these web application servers can be placed either in availability sets or availability zones for better availability.
Tomcat is the default web application server for SAP BI Platform. To achieve high availability for Tomcat, enable session replication using the Static Membership Interceptor in Azure. This ensures that users can access the SAP BI web application even when a Tomcat service is disrupted.
IMPORTANT
By default, Tomcat uses a multicast IP and port for clustering, which is not supported on Azure (SAP Note 2764907).
BI platform servers
BI Platform servers include all the services that are part of the SAP BOBI application (management tier, processing tier, and storage tier). When a web server receives a request, it detects each BI Platform server (specifically, all CMS servers in a cluster) and automatically load balances the requests. If one of the BI Platform hosts fails, the web server automatically sends requests to another host.
To achieve high availability or redundancy for BI Platform, you must deploy the application on at least two Azure virtual machines. Based on the sizing, you can scale your BI Platform to run on more Azure virtual machines.
File repository server (FRS)
The File Repository Server contains all reports and other BI documents that have been created. In a multi-instance deployment, the BI Platform servers run on multiple virtual machines, and each VM needs access to these reports and other BI documents. So, a file system needs to be shared across all BI Platform servers. In Azure, you can use either Azure Premium Files or Azure NetApp Files for the File Repository Server. Both of these Azure services have built-in redundancy.
IMPORTANT
SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For
more information, see NFS 4.1 support for Azure Files is now in preview
Support matrix
This section describes the supportability of the different SAP BOBI components, like SAP BusinessObjects BI Platform version, operating system, and databases, in Azure.
SAP BusinessObjects BI platform
Azure Infrastructure as a Service (IaaS) enables you to deploy and configure SAP BusinessObjects BI Platform on Azure compute. The following versions of SAP BOBI Platform are supported:
SAP BusinessObjects BI Platform 4.3
SAP BusinessObjects BI Platform 4.2 SP04+
SAP BusinessObjects BI Platform 4.1 SP05+
The SAP BI Platform runs on different operating systems and databases. Supportability of the SAP BOBI Platform between operating system and database versions can be found in the Product Availability Matrix for SAP BOBI.
Operating system
Azure supports the following operating systems for SAP BusinessObjects BI Platform deployment:
Microsoft Windows Server
SUSE Linux Enterprise Server (SLES)
Red Hat Enterprise Linux (RHEL)
Oracle Linux (OL)
The operating system versions that are listed in the Product Availability Matrix (PAM) for SAP BusinessObjects BI Platform are supported as long as they're compatible to run on Azure infrastructure.
Databases
The BI Platform needs a database for the CMS and Auditing Data Stores, which can be installed on any of the supported databases listed in the SAP Product Availability Matrix, including the following:
Microsoft SQL Server
Azure SQL Database (Supported database only for SAP BOBI Platform on Windows)
It's a fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server.
Azure SQL database handles most of the database management functions such as upgrading, patching, and
monitoring without user involvement. With Azure SQL Database, you can create a highly available and high-
performance data storage layer for the applications and solutions in Azure. For more details, check Azure
SQL Database documentation.
Azure Database for MySQL (Follow same compatibility guidelines as mentioned for MySQL AB in SAP PAM)
It's a relational database service powered by the MySQL community edition. Being a fully managed
Database-as-a-Service (DBaaS) offering, it can handle mission-critical workloads with predictable
performance and dynamic scalability. It has built-in high availability, automatic backups, software patching,
automatic failure detection, and point-in-time restore for up to 35 days, which substantially reduce
operation tasks. For more details, check Azure Database for MySQL documentation.
SAP HANA
SAP ASE
IBM DB2
Oracle (For version and restriction, check SAP Note 2039619)
MaxDB
This document illustrates the guidelines to deploy SAP BOBI Platform on Windows with Azure SQL Database, and SAP BOBI Platform on Linux with Azure Database for MySQL . These are also our recommended approaches for running SAP BusinessObjects BI Platform on Azure.
Sizing
Sizing is the process of determining the hardware requirements to run the application efficiently. For SAP BOBI Platform, sizing needs to be done using the SAP sizing tool called Quick Sizer. The tool provides the SAPS based on the input, which then needs to be mapped to certified Azure virtual machine types for SAP. SAP Note 1928533 provides the list of supported SAP products and Azure VM types along with SAPS. For more information on sizing, check the SAP BI Sizing Guide.
For the storage needs of SAP BOBI Platform, Azure offers different types of managed disks. For the SAP BOBI installation directory, it's recommended to use premium managed disks, and for the database that runs on virtual machines, follow the guidance that is provided in DBMS deployment for SAP workload.
Azure supports two DBaaS offerings for the SAP BOBI Platform data tier - Azure SQL Database (BI application running on Windows) and Azure Database for MySQL (BI application running on Linux or Windows). Based on the sizing result, you can choose the purchasing model that best fits your needs.
TIP
For quick sizing reference, consider 800 SAPS = 1 vCPU while mapping the SAPS result of SAP BOBI Platform database tier to
Azure Database-as-a-Service (Azure SQL Database or Azure Database for MySQL).
NOTE
For SAP BOBI, it's convenient to use the vCore-based model and choose either the General Purpose or Business Critical service tier based on the business need.
NOTE
For SAP BOBI, it's convenient to use the General Purpose or Memory Optimized pricing tier based on the business workload.
Azure resources
Choosing regions
An Azure region is one or a collection of datacenters that contains the infrastructure to run and host different Azure services. This infrastructure includes a large number of nodes that function as compute or storage nodes, or run network functionality. Not all regions offer the same services.
SAP BI Platform contains different components that might require specific VM types, storage like Azure Files or Azure NetApp Files, or Database as a Service (DBaaS) for its data tier, and these might not be available in certain regions. You can find exact information on VM types, Azure Storage types, and other Azure services on the Products available by region site. If you're already running your SAP systems on Azure, you probably have your region identified. In that case, you need to first verify that the necessary services are available in those regions to decide the architecture of SAP BI Platform.
Availability zones
Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made of one or
more datacenters equipped with independent power, cooling, and networking.
To achieve high availability on each tier of SAP BI Platform, you can distribute VMs across Availability Zones by implementing a high availability framework, which can provide the best SLA in Azure. For the virtual machine SLA in Azure, check the latest version of Virtual Machine SLAs.
For the data tier, Azure Database as a Service (DBaaS) provides a high availability framework by default. You just need to select the region; the service's inherent high availability, redundancy, and resiliency capabilities mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components. For more details on the SLAs of the supported DBaaS offerings on Azure, check High availability in Azure Database for MySQL and High availability for Azure SQL Database.
Availability sets
An availability set is a logical grouping capability for isolating virtual machine (VM) resources from each other when they're deployed. Azure makes sure that the VMs you place within an availability set run across multiple physical servers, compute racks, storage units, and network switches. If a hardware or software failure happens, only a subset of your VMs is affected, and your overall solution stays operational. When virtual machines are placed in availability sets, the Azure fabric controller distributes the VMs over different fault and upgrade domains to prevent all VMs from being inaccessible because of infrastructure maintenance or failure within one fault domain.
SAP BI Platform contains many different components, and while designing the architecture you have to make sure that each of these components is resilient to disruption. This can be achieved by placing the Azure virtual machines of each component within availability sets. Keep in mind that when you mix VMs of different VM families within one availability set, you may come across problems that prevent you from including a certain VM type in that availability set. So have separate availability sets for the Web Application and BI Application tiers of SAP BI Platform, as highlighted in the Architecture Overview.
Also, the number of update and fault domains that can be used by an Azure availability set within an Azure scale unit is finite. So if you keep adding VMs to a single availability set, two or more VMs will eventually end up in the same fault or update domain. For more information, see the Azure Availability Sets section of the Azure virtual machines planning and implementation for SAP document.
To understand the concept of Azure availability sets and the way availability sets relate to fault and upgrade domains, read the manage availability article.
IMPORTANT
The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means you can deploy a pair or multiple VMs either into a specific Availability Zone or into an Azure availability set, but not both.
Virtual machines
Azure Virtual Machines is a service offering that enables you to deploy custom images to Azure as Infrastructure-as-a-Service (IaaS) instances. It simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web applications and connected applications.
Azure offers a variety of virtual machines for all your application needs. But for SAP workload, Azure has narrowed the selection to different VM families that are suitable for SAP workload, and for SAP HANA workload more specifically. For more insight, check What SAP software is supported for Azure deployments.
Based on the SAP BI Platform sizing, you need to map your requirements to Azure virtual machine types that are supported in Azure for SAP products. SAP Note 1928533 is a good starting point that lists the supported Azure VM types for SAP products on Windows and Linux. Also keep in mind that, beyond the selection of purely supported VM types, you need to check whether those VM types are available in your specific region. You can check the availability of VM types on the Products available by region page. For choosing the pricing model, you can refer to Azure virtual machines for SAP workload.
Storage
Azure Storage is an Azure-managed cloud service that provides storage that is highly available, secure, durable,
scalable, and redundant. Some of the storage types have limited use for SAP scenarios. But several Azure Storage
types are well suited or optimized for specific SAP workload scenarios. For more information, refer Azure Storage
types for SAP Workload guide, as it highlights different storage options that are suited for SAP.
Azure Storage has different storage types available for customers; details can be found in the article What disk types are available in Azure?. SAP BOBI Platform uses the following Azure Storage types to build the application:
Azure-managed disks
It's a block-level storage volume that is managed by Azure. You can use these disks for SAP BOBI Platform application servers and databases installed on Azure virtual machines. There are different types of Azure managed disks available, but it's recommended to use Premium SSDs for the SAP BOBI Platform application and database.
In the example below, Premium SSDs are used for the BOBI Platform installation directory. For a database installed on a virtual machine, you can use managed disks for the data and log volumes as per the guidelines. The CMS and Audit databases are typically small, and they don't have the same storage performance requirements as other SAP OLTP/OLAP databases.
Azure Premium Files or Azure NetApp Files
In SAP BOBI Platform, the File Repository Server (FRS) refers to the disk directories where contents like reports, universes, and connections are stored, which are used by all application servers of that system. Azure Premium Files or Azure NetApp Files storage can be used as a shared file system for the SAP BOBI application's FRS. As these storage offerings aren't available in all regions, refer to the Products available by region site to find up-to-date information.
If the service is unavailable in your region, you can create an NFS server from which you can share the file system to the SAP BOBI application. But you'll also need to consider its high availability.
Networking
SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. The system connects to other database servers, from which it fetches all the data and provides insights to users. Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized with SAP BI Platform, like connecting to on-premises systems, systems in different virtual networks, and others. For more information, check Microsoft Azure Networking for SAP Workload.
For the Database-as-a-Service offerings, any newly created database (Azure SQL Database or Azure Database for MySQL) has a firewall that blocks all external connections. To allow access to the DBaaS service from BI Platform virtual machines, you need to specify one or more server-level firewall rules to enable access to your DBaaS server. For more information, see Firewall rules for Azure Database for MySQL and the Network Access Controls section for Azure SQL Database.
Next steps
SAP BusinessObjects BI Platform Deployment on Linux
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform deployment guide
for Linux on Azure
12/22/2020 • 31 minutes to read
This article describes the strategy to deploy SAP BOBI Platform on Azure for Linux. In this example, two virtual machines with Premium SSD managed disks as their install directories are configured. Azure Database for MySQL is used for the CMS database, and Azure NetApp Files for the File Repository Server is shared across both servers. The default Tomcat Java web application and the BI Platform application are installed together on both virtual machines. To load balance the user requests, Application Gateway is used, which has native TLS/SSL offloading capabilities.
This type of architecture is effective for small deployments or non-production environments. For production or large-scale deployments, you can have separate hosts for the web application, and you can also have multiple BOBI application hosts, allowing the system to process more information.
In this example, the following product versions and file system layout are used:
SAP BusinessObjects Platform 4.3
SUSE Linux Enterprise Server 12 SP5
Azure Database for MySQL (Version: 8.0.15)
MySQL C API Connector - libmysqlclient (Version: 6.1.11)
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
└─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 32G 0 disk
└─sdb1 8:17 0 32G 0 part /mnt
sdc 8:32 0 128G 0 disk
sr0 11:0 1 628K 0 rom
# The 128-GB Premium SSD attached to the virtual machine has the device name sdc
sudo blkid
# It displays information about block devices. Copy the UUID of the formatted block device
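The formatting and fstab steps aren't shown in this excerpt. A minimal sketch, assuming XFS and a mount point of /usr/sap for the installation directory (both assumptions); the mkfs step would normally precede blkid:
# Format the data disk (sdc) and create the mount point
sudo mkfs.xfs /dev/sdc
sudo mkdir -p /usr/sap
# Add an fstab entry using the UUID copied from blkid (placeholder shown), then mount
echo "UUID=<UUID-from-blkid> /usr/sap xfs defaults,nofail 0 2" | sudo tee -a /etc/fstab
sudo mount -a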
sudo df -h
2. [A] Configure Client OS to support NFSv4.1 Mount (Only applicable if using NFSv4.1)
If you're using Azure NetApp Files volumes with NFSv4.1 protocol, execute following configuration on all
VMs, where Azure NetApp Files NFSv4.1 volumes need to be mounted.
Verify NFS domain settings
Make sure that the domain is configured as the default Azure NetApp Files domain, that is, defaultv4iddomain.com, and that the mapping is set to nobody.
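A quick way to verify the settings (the expected values are the ones described above):
sudo cat /etc/idmapd.conf
# Expected:
# [General]
# Domain = defaultv4iddomain.com
# [Mapping]
# Nobody-User = nobody
# Nobody-Group = nobody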
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on Azure NetApp Files: defaultv4iddomain.com. If there's a mismatch between the domain configuration on the NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), the permissions for files on Azure NetApp Files volumes that are mounted on the VMs will be displayed as nobody.
If using NFSv4.1
sudo mount -a
sudo df -h
NOTE
Changing the Backup Redundancy Options after server creation is not supported.
3. In the SQL query tab, run the query below to create the schemas for the CMS and Audit databases.
# Here cmsbl1 is the database name of the CMS database. You can provide the name you want for the CMS database.
CREATE SCHEMA `cmsbl1` DEFAULT CHARACTER SET utf8;
# auditbl1 is the database name of the Audit database. You can provide the name you want for the Audit database.
CREATE SCHEMA `auditbl1` DEFAULT CHARACTER SET utf8;
# Create a user that can connect from any host, use the '%' wildcard as a host part
CREATE USER 'cmsadmin'@'%' IDENTIFIED BY 'password';
CREATE USER 'auditadmin'@'%' IDENTIFIED BY 'password';
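# The GRANT statements aren't shown in this excerpt; the SHOW GRANTS output below
# indicates that each user was granted full privileges on its own schema:
GRANT ALL PRIVILEGES ON cmsbl1.* TO 'cmsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON auditbl1.* TO 'auditadmin'@'%' WITH GRANT OPTION;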
# Following any updates to the user privileges, be sure to save the changes by issuing FLUSH PRIVILEGES
FLUSH PRIVILEGES;
USE sys;
SHOW GRANTS for 'cmsadmin'@'%';
+------------------------------------------------------------------------+
| Grants for cmsadmin@% |
+------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `cmsadmin`@`%` |
| GRANT ALL PRIVILEGES ON `cmsbl1`.* TO `cmsadmin`@`%` WITH GRANT OPTION |
+------------------------------------------------------------------------+
USE sys;
SHOW GRANTS FOR 'auditadmin'@'%';
+----------------------------------------------------------------------------+
| Grants for auditadmin@% |
+----------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `auditadmin`@`%` |
| GRANT ALL PRIVILEGES ON `auditbl1`.* TO `auditadmin`@`%` WITH GRANT OPTION |
+----------------------------------------------------------------------------+
# sample output (for example, from 'whereis libmysqlclient')
libmysqlclient: /usr/lib64/libmysqlclient.so
6. Set LD_LIBRARY_PATH to point to the /usr/lib64 directory for the user account that will be used for installation.
# This configuration is for the bash shell. If you use any other shell for sidadm, set the environment variable accordingly.
vi /home/bl1adm/.bashrc
export LD_LIBRARY_PATH=/usr/lib64
Server Preparation
The steps in this section use the following prefixes:
[A] : The step applies to all hosts.
1. [A] Based on the flavor of Linux (SLES or RHEL), you need to set kernel parameters and install required
libraries. Refer to System requirements section in Business Intelligence Platform Installation Guide for
Unix.
2. [A] Ensure the time zone on your machine is set correctly. Refer to Additional Unix and Linux requirements
section in Installation Guide.
3. [A] Create the user account (bl1adm) and group (sapsys) under which the software's background processes can run. Use this account to execute the installation and run the software. The account doesn't require root privileges.
4. [A] Set the user account (bl1adm) environment to use a supported UTF-8 locale, and ensure that your console software supports UTF-8 character sets. To ensure that your operating system uses the correct locale, set the LC_ALL and LANG environment variables to your preferred locale in the (bl1adm) user environment.
# This configuration is for the bash shell. If you use any other shell for sidadm, set the environment variable accordingly.
vi /home/bl1adm/.bashrc
export LANG=en_US.utf8
export LC_ALL=en_US.utf8
root@azusbosl1:~> su - bl1adm
bl1adm@azusbosl1:~> ulimit -a
6. Download and extract media for SAP BusinessObjects BI Platform from SAP Service Marketplace.
Installation
Check the locale for the user account bl1adm on the server:
bl1adm@azusbosl1:~> locale
LANG=en_US.utf8
LC_ALL=en_US.utf8
Navigate to the media of SAP BusinessObjects BI Platform and run the command below as the bl1adm user:
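The command itself isn't shown in this excerpt; the BOBI installer is typically started from the extracted media directory like this:
# Run as bl1adm from the directory where the installation media was extracted
./setup.sh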
Follow the SAP BOBI Platform Installation Guide for Unix, specific to your version. A few points to note while installing SAP BOBI Platform:
On the Configure Product Registration screen, you can either use a temporary license key for SAP BusinessObjects Solutions from SAP Note 1288121 or generate a license key in SAP Service Marketplace.
On the Select Install Type screen, select Full installation on the first server (azusbosl1); for the other server (azusbosl2), select Custom / Expand, which will expand the existing BOBI setup.
On the Select Default or Existing Database screen, select configure an existing database, which will prompt you to select the CMS and Audit databases. Select MySQL for the CMS Database type and Audit Database type.
You can also select No auditing database, if you don't want to configure auditing during installation.
Select appropriate options on the Select Java Web Application Server screen based on your SAP BOBI architecture. In this example, we selected option 1, which installs the Tomcat server on the same SAP BOBI Platform.
Enter the CMS database information in Configure CMS Repository Database - MySQL. Example input for the CMS database information for a Linux installation: Azure Database for MySQL is used on the default port 3306.
(Optional) Enter the Audit database information in Configure Audit Repository Database - MySQL. Example input for the Audit database information for a Linux installation.
Follow the instructions and enter required inputs to complete the installation.
For a multi-instance deployment, run the installation setup on the second host (azusbosl2). On the Select Install Type screen, select Custom / Expand, which will expand the existing BOBI setup.
In the Azure Database for MySQL offering, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the SELECT VERSION(); command at the MySQL prompt. So in the Central Management Console (CMC), you'll find a different database version, which is basically the version set on the gateway. Check Supported Azure Database for MySQL server versions for more details.
select version();
+-----------+
| version() |
+-----------+
| 8.0.15 |
+-----------+
Post installation
Tomcat clustering - session replication
Tomcat supports clustering of two or more application servers for session replication and failover. SAP BOBI Platform sessions are serialized, so a user session can fail over seamlessly to another instance of Tomcat, even when an application server fails.
For example, a user may be connected to a web server that fails while the user is navigating a folder hierarchy in the SAP BI application. With a correctly configured cluster, the user can continue navigating the folder hierarchy without being redirected to the sign-in page.
SAP Note 2808640 provides steps to configure a Tomcat cluster using multicast. But in Azure, multicast isn't supported. So to make a Tomcat cluster work in Azure, you must use the StaticMembershipInterceptor (SAP Note 2764907). Check Tomcat Clustering using Static Membership for SAP BusinessObjects BI Platform on the SAP blog to set up a Tomcat cluster in Azure.
Load-balancing web tier of SAP BI platform
In an SAP BOBI multi-instance deployment, Java web application servers (the web tier) run on two or more hosts. To distribute user load evenly across the web servers, you can use a load balancer between end users and web servers. In Azure, you can use either Azure Load Balancer or Azure Application Gateway to manage traffic to your web application servers. Details about each offering are explained in the following sections.
Azure load balancer (network-based load balancer)
Azure Load Balancer is a high-performance, low-latency layer 4 (TCP, UDP) load balancer that distributes traffic
among healthy virtual machines. A load balancer health probe monitors a given port on each VM and distributes
traffic only to operational virtual machines. You can choose either a public load balancer or an internal load
balancer, depending on whether you want the SAP BI Platform to be accessible from the internet. It's zone-
redundant, ensuring high availability across Availability Zones.
Refer to the Internal Load Balancer section in the figure below, where the web application server runs on port
8080 (the default Tomcat HTTP port), which is monitored by a health probe. Any incoming request from end users
gets redirected to the web application servers (azusbosl1 or azusbosl2) in the backend pool. Load Balancer
doesn't support TLS/SSL termination (also known as TLS/SSL offloading). If you're using Azure Load Balancer to
distribute traffic across web servers, we recommend using Standard Load Balancer.
NOTE
When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure
load balancer, there is no outbound internet connectivity unless additional configuration is performed to allow routing to
public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
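If you script your deployment, the Azure CLI sketch below shows how such an internal Standard Load Balancer with a
TCP health probe on port 8080 could be created. This is a minimal sketch: the resource group, VNet/subnet, and all
resource names are assumptions for illustration, not values from this guide.

# Sketch only: resource group, VNet/subnet and names are placeholders
az network lb create --resource-group rg-bobi --name bobi-web-ilb --sku Standard \
  --vnet-name bobi-vnet --subnet bobi-web-subnet \
  --frontend-ip-name fe-web --backend-pool-name bp-web
# Health probe on the default Tomcat HTTP port
az network lb probe create --resource-group rg-bobi --lb-name bobi-web-ilb \
  --name tomcat-probe --protocol tcp --port 8080
# Load-balancing rule that forwards TCP/8080 to the web servers in the backend pool
az network lb rule create --resource-group rg-bobi --lb-name bobi-web-ilb \
  --name web-rule --protocol tcp --frontend-port 8080 --backend-port 8080 \
  --frontend-ip-name fe-web --backend-pool-name bp-web --probe-name tomcat-probe

The backend pool is then populated by associating the NICs of azusbosl1 and azusbosl2 with it.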
Azure application gateway (web application load balancer)
Azure Application Gateway (AGW) provides an Application Delivery Controller (ADC) as a service, which helps
direct user traffic to one or more web application servers. It offers various layer 7 load-balancing capabilities like
TLS/SSL offloading, Web Application Firewall (WAF), cookie-based session affinity, and others for your
applications.
In SAP BI Platform, the application gateway directs application web traffic to the specified resources in a backend
pool - azusbosl1 or azusbos2. You assign a listener to a port, create rules, and add resources to a backend pool. In
the figure below, the application gateway with a private frontend IP address (10.31.3.20) acts as the entry point for
users, handles incoming TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL traffic, and passes the
unencrypted requests (HTTP - TCP/8080) to the servers in the backend pool. With the built-in TLS/SSL termination
feature, you need to maintain only one TLS/SSL certificate on the application gateway, which simplifies operations.
To configure Application Gateway for SAP BOBI Web Server, you can refer to Load Balancing SAP BOBI Web
Servers using Azure Application Gateway on SAP blog.
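As a rough illustration, the Azure CLI command below sketches such an application gateway with an HTTPS listener
on port 443 and backend HTTP settings on port 8080. All names, IP addresses, and the certificate file are
placeholders, and depending on the SKU a public frontend IP address may also be required; treat this as a sketch
rather than a complete deployment.

# Sketch only: names, addresses and the PFX certificate are placeholders
az network application-gateway create --resource-group rg-bobi --name bobi-agw \
  --sku Standard_v2 --capacity 2 \
  --vnet-name bobi-vnet --subnet bobi-agw-subnet --private-ip-address 10.31.3.20 \
  --frontend-port 443 --cert-file bobi-web-cert.pfx --cert-password '<pfx-password>' \
  --http-settings-port 8080 --http-settings-protocol Http \
  --servers 10.31.3.10 10.31.3.11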
NOTE
We recommend using Azure Application Gateway to load balance the traffic to the web servers, as it provides features like
TLS/SSL offloading, centralized TLS/SSL management to reduce encryption and decryption overhead on the servers, a
round-robin algorithm to distribute traffic, Web Application Firewall (WAF) capabilities, and high availability.
NOTE
SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For more
information, see NFS 4.1 support for Azure Files is now in preview.
Because this file share service isn't available in all regions, make sure you refer to the Products available by region
site for up-to-date information. If the service isn't available in your region, you can create an NFS server from
which you share the file system to the SAP BOBI application, but you'll also need to consider its high availability.
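If you take the self-managed NFS server route, the outline below shows a minimal single-node export on a RHEL VM.
The export path is an assumption, and this sketch deliberately ignores the high-availability consideration mentioned
above.

# Minimal single-node NFS export on RHEL (illustrative only; no HA)
sudo yum install -y nfs-utils
sudo mkdir -p /export/bobi-frs
echo '/export/bobi-frs *(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo systemctl enable --now nfs-server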
High availability for load balancer
To distribute traffic across web servers, you can use either Azure Load Balancer or Azure Application Gateway. The
redundancy for either load balancer can be achieved based on the SKU you choose for deployment.
For Azure Load Balancer, redundancy can be achieved by configuring the Standard Load Balancer frontend as
zone-redundant. For more information, see Standard Load Balancer and Availability Zones.
For Application Gateway, high availability can be achieved based on the tier selected during deployment.
The v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure
distributes these instances across update and fault domains to ensure that the instances don't all fail at the
same time. With this SKU, redundancy can be achieved within the zone.
The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If
you choose zone redundancy, the newest instances are also spread across availability zones to offer
resiliency to zonal failures. For more details, refer to Autoscaling and Zone-redundant Application Gateway v2.
Reference high availability architecture for SAP BusinessObjects BI platform
The reference architecture below describes the setup of SAP BOBI Platform using an availability set, which
provides VM redundancy and availability within a zone. The architecture showcases the use of different Azure
services like Azure Application Gateway, Azure NetApp Files, and Azure Database for MySQL for SAP BOBI
Platform that offer built-in redundancy, which reduces the complexity of managing different high-availability
solutions.
In the figure below, the incoming traffic (HTTPS - TCP/443) is load balanced using the Azure Application Gateway
v1 SKU, which is highly available when deployed on two or more instances. Multiple instances of the web server,
management servers, and processing servers are deployed in separate virtual machines to achieve redundancy,
and each tier is deployed in a separate availability set. Azure NetApp Files has built-in redundancy within the
datacenter, so your ANF volumes for the File Repository Server are highly available. The CMS database is
provisioned on Azure Database for MySQL (DBaaS), which has inherent high availability. For more information, see
the High availability in Azure Database for MySQL guide.
The above architecture provides insight into how an SAP BOBI deployment on Azure can be done, but it doesn't
cover all possible configuration options for SAP BOBI Platform on Azure. Customers can tailor their deployment
based on their business requirements by choosing different products or services for components like the load
balancer, File Repository Server, and DBMS.
Several Azure regions offer Availability Zones, which have independent supplies of power, cooling, and network.
They enable customers to deploy applications across two or three availability zones. Customers who want to
achieve high availability across AZs can deploy SAP BOBI Platform across availability zones, making sure that each
component in the application is zone redundant.
Disaster recovery
The instructions in this section explain the strategy to provide disaster recovery protection for SAP BOBI Platform.
They complement the Disaster Recovery for SAP document, which represents the primary resource for the overall
SAP disaster recovery approach.
Reference disaster recovery architecture for SAP BusinessObjects BI platform
This reference architecture runs a multi-instance deployment of SAP BOBI Platform with redundant application
servers. For disaster recovery, you should fail over all tiers to a secondary region. Each tier uses a different strategy
to provide disaster recovery protection.
Load balancer
A load balancer is used to distribute traffic across the web application servers of SAP BOBI Platform. To achieve DR
for Azure Application Gateway, implement a parallel setup of the application gateway in the secondary region.
Virtual machines running web and BI application servers
The Azure Site Recovery service can be used to replicate the virtual machines running the web and BI application
servers to the secondary region. It replicates the servers to the secondary region so that when disasters and
outages occur, you can easily fail over to your replicated environment and continue working.
File repository servers
Azure NetApp Files provides NFS and SMB volumes, so any file-based copy tool can be used to replicate
data between Azure regions. For more information on how to copy an ANF volume to another region, see FAQs
About Azure NetApp Files.
You can use Azure NetApp Files Cross Region Replication, which is currently in preview and uses NetApp
SnapMirror® technology, so only changed blocks are sent over the network in a compressed, efficient
format. This proprietary technology minimizes the amount of data required to replicate across regions,
which saves data transfer costs. It also shortens the replication time, so you can achieve a smaller Recovery
Point Objective (RPO). Refer to Requirements and considerations for using cross-region replication for more
information.
Azure premium files only support locally redundant storage (LRS) and zone-redundant storage (ZRS). For an
Azure Premium Files DR strategy, you can use AzCopy or Azure PowerShell to copy your files to another storage
account in a different region, as in the sketch below. For more information, see Disaster recovery and storage
account failover.
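For example, a one-off AzCopy replication of a file share to a storage account in the DR region could look like the
following. The storage account names, share name, and SAS tokens are placeholders.

# Sketch only: account names, share name and SAS tokens are placeholders
azcopy copy 'https://bobifrsprimary.file.core.windows.net/frsdata?<SAS>' \
  'https://bobifrsdr.file.core.windows.net/frsdata?<SAS>' --recursive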
CMS database
Azure Database for MySQL provides multiple options to recover the database if there is a disaster. Choose the
appropriate option that works for your business.
Enable cross-region read replicas to enhance your business continuity and disaster recovery planning. You
can replicate from the source server to up to five replicas. Read replicas are updated asynchronously using
MySQL's binary log replication technology. Replicas are new servers that you manage similarly to regular
Azure Database for MySQL servers. Learn more about read replicas, available regions, restrictions, and how
to fail over in the read replicas concepts article.
Use Azure Database for MySQL's geo-restore feature, which restores the server using geo-redundant backups.
These backups are accessible even when the region your server is hosted in is offline. You can restore from
these backups to any other region and bring your server back online.
NOTE
Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. Changing the
Backup Redundancy Options after server creation is not supported. For more information, see the Backup
Redundancy article.
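Both options can be scripted with the Azure CLI, as in the sketch below. Server names, resource groups, regions,
subscription ID, and SKU are placeholders to adapt to your environment.

# Option 1 (sketch): create a cross-region read replica of the CMS server
az mysql server replica create --name bobi-cms-dr --resource-group rg-bobi-dr \
  --location westus2 \
  --source-server /subscriptions/<sub-id>/resourceGroups/rg-bobi/providers/Microsoft.DBforMySQL/servers/bobi-cms
# Option 2 (sketch): geo-restore the latest geo-redundant backup into another region
az mysql server georestore --name bobi-cms-restored --resource-group rg-bobi-dr \
  --location westus2 --source-server bobi-cms --sku-name GP_Gen5_2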
Following is the disaster recovery recommendation for each tier used in this example.

SAP BOBI Platform tier | Recommendation
Azure NetApp Files | File-based copy tool to replicate data to the secondary region, or ANF Cross Region Replication (preview)
Azure Database for MySQL | Cross-region read replicas, or restore from geo-redundant backups
Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Tutorial: Configure SAP SuccessFactors to Active
Directory user provisioning
The objective of this tutorial is to show the steps you need to perform to provision users from SuccessFactors
Employee Central into Active Directory (AD) and Azure AD, with optional write-back of email address to
SuccessFactors.
NOTE
Use this tutorial if the users you want to provision from SuccessFactors need an on-premises AD account and optionally an
Azure AD account. If the users from SuccessFactors only need an Azure AD account (cloud-only users), then refer to the
tutorial on configuring SAP SuccessFactors to Azure AD user provisioning.
Overview
The Azure Active Directory user provisioning service integrates with SuccessFactors Employee Central to manage
the identity lifecycle of users.
The SuccessFactors user provisioning workflows supported by the Azure AD user provisioning service enable
automation of the following human resources and identity lifecycle management scenarios:
Hiring new employees - When a new employee is added to SuccessFactors, a user account is
automatically created in Active Directory, Azure Active Directory, and optionally Microsoft 365 and other
SaaS applications supported by Azure AD, with write-back of the email address to SuccessFactors.
Employee attribute and profile updates - When an employee record is updated in SuccessFactors (such
as their name, title, or manager), their user account will be automatically updated in Active Directory, Azure
Active Directory, and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Employee terminations - When an employee is terminated in SuccessFactors, their user account is
automatically disabled in Active Directory, Azure Active Directory, and optionally Microsoft 365 and other
SaaS applications supported by Azure AD.
Employee rehires - When an employee is rehired in SuccessFactors, their old account can be automatically
reactivated or re-provisioned (depending on your preference) to Active Directory, Azure Active Directory,
and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Who is this user provisioning solution best suited for?
This SuccessFactors to Active Directory user provisioning solution is ideally suited for:
Organizations that desire a pre-built, cloud-based solution for SuccessFactors user provisioning
Organizations that require direct user provisioning from SuccessFactors to Active Directory
Organizations that require users to be provisioned using data obtained from the SuccessFactors Employee
Central (EC)
Organizations that require joining, moving, and leaving users to be synced to one or more Active Directory
Forests, Domains, and OUs based only on change information detected in SuccessFactors Employee Central
(EC)
Organizations using Microsoft 365 for email
Solution Architecture
This section describes the end-to-end user provisioning solution architecture for common hybrid environments.
There are two related flows:
Authoritative HR Data Flow – from SuccessFactors to on-premises Active Directory: In this flow,
worker events (such as New Hires, Transfers, and Terminations) first occur in the cloud SuccessFactors Employee
Central, and then the event data flows into on-premises Active Directory through Azure AD and the
Provisioning Agent. Depending on the event, it may lead to create/update/enable/disable operations in AD.
Email Writeback Flow – from on-premises Active Directory to SuccessFactors: Once the account
creation is complete in Active Directory, it is synced with Azure AD through Azure AD Connect sync, and the
email attribute can be written back to SuccessFactors.
Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.
Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools. Check the box for Allow Admin to Access to OData API through Basic
Authentication.
Scroll down in the same box and select Employee Central API. Add permissions as shown below to read
using the OData API and edit using the OData API. Select the edit option if you plan to use the same account for
the Writeback to SuccessFactors scenario.
NOTE
For the complete list of attributes retrieved by this provisioning app, please refer to SuccessFactors Attribute
Reference
Add a Group Name for the new group. The group name should indicate that the group is for API users.
Add members to the group. For example, you could select Username from the People Pool drop-down menu
and then enter the username of the API account that will be used for the integration.
TIP
You can check the version of the .NET framework on your server using the instructions provided here. If the server does not
have .NET 4.7.1 or higher installed, you can download it from here.
Transfer the downloaded agent installer to the server host and follow the steps given below to complete the agent
configuration.
1. Sign in to the Windows Server where you want to install the new agent.
2. Launch the Provisioning Agent installer, agree to the terms, and click on the Install button.
3. After installation is complete, the wizard will launch and you will see the Connect Azure AD screen. Click
on the Authenticate button to connect to your Azure AD instance.
5. After successful authentication with Azure AD, you will see the Connect Active Directory screen. In this
step, enter your AD domain name and click on the Add Directory button.
6. You will now be prompted to enter the credentials required to connect to the AD Domain. On the same
screen, you can use the Select domain controller priority to specify domain controllers that the agent
should use for sending provisioning requests.
7. After configuring the domain, the installer displays a list of configured domains. On this screen, you can
repeat step #5 and #6 to add more domains or click on Next to proceed to agent registration.
NOTE
If you have multiple AD domains (e.g. na.contoso.com, emea.contoso.com), then please add each domain individually
to the list. Only adding the parent domain (e.g. contoso.com) is not sufficient. You must register each child domain
with the agent.
8. Review the configuration details and click on Confirm to register the agent.
10. Once the agent registration is successful, you can click on Exit to exit the Wizard.
11. Verify the installation of the Agent and make sure it is running by opening the "Services" Snap-In and look
for the Service named "Microsoft Azure AD Connect Provisioning Agent"
Part 3: In the provisioning app, configure connectivity to SuccessFactors and Active Directory
In this step, we establish connectivity with SuccessFactors and Active Directory in the Azure portal.
1. In the Azure portal, go back to the SuccessFactors to Active Directory User Provisioning App created in Part
1
2. Complete the Admin Credentials section as follows:
Admin Username – Enter the username of the SuccessFactors API user account, with the company
ID appended. It has the format: username@companyID
Admin password – Enter the password of the SuccessFactors API user account.
Tenant URL – Enter the name of the SuccessFactors OData API services endpoint. Only enter the
hostname of the server, without http or https. This value should look like: .successfactors.com.
Active Directory Forest - The "Name" of your Active Directory domain, as registered with the
agent. Use the dropdown to select the target domain for provisioning. This value is typically a string
like: contoso.com
Active Directory Container - Enter the container DN where the agent should create user accounts
by default. Example: OU=Users,DC=contoso,DC=com
NOTE
This setting only comes into play for user account creation if the parentDistinguishedName attribute is not
configured in the attribute mappings. This setting is not used for user search or update operations. The entire
domain subtree falls within the scope of the search operation.
Notification Email – Enter your email address, and check the "send email if failure occurs"
checkbox.
NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.
Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and the AD credentials configured on the agent
setup are valid.
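Independently of the portal's Test Connection button, you can sanity-check the API account against the
SuccessFactors OData endpoint, for example with curl. The API host, company ID, and credentials below are
placeholders; PerPerson is used here only as a convenient entity to read.

# Sketch only: host, company ID and credentials are placeholders
curl -u 'sfapiuser@companyID:<password>' \
  'https://api4.successfactors.com/odata/v2/PerPerson?$top=1&$format=json'

A 200 response with one record confirms that Basic authentication and the OData read permissions are working.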
Once the credentials are saved successfully, the Mappings section will display the default mapping
Synchronize SuccessFactors Users to On Premises Active Directory.
Part 4: Configure attribute mappings
In this section, you will configure how user data flows from SuccessFactors to Active Directory.
1. On the Provisioning tab under Mappings, click Synchronize SuccessFactors Users to On Premises
Active Directory.
2. In the Source Object Scope field, you can select which sets of users in SuccessFactors should be in scope
for provisioning to AD, by defining a set of attribute-based filters. The default scope is "all users in
SuccessFactors". Example filters:
Example: Scope to users with personIdExternal between 1000000 and 2000000 (excluding 2000000)
Attribute: personIdExternal
Operator: REGEX Match
Value: (1[0-9][0-9][0-9][0-9][0-9][0-9])
Example: Only employees and not contingent workers
Attribute: EmployeeID
Operator: IS NOT NULL
TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from SuccessFactors. Once
you have verified that the mappings work, you can either remove the filter or gradually expand it to include
more users.
Caution
The default behavior of the provisioning engine is to disable/delete users that go out of scope. This may not
be desirable in your SuccessFactors to AD integration. To override this default behavior, refer to the article
Skip deletion of user accounts that go out of scope.
3. In the Target Object Actions field, you can globally filter what actions are performed on Active Directory.
Create and Update are most common.
4. In the Attribute mappings section, you can define how individual SuccessFactors attributes map to Active
Directory attributes.
NOTE
For the complete list of SuccessFactors attributes supported by the application, please refer to the SuccessFactors
Attribute Reference.
1. Click on an existing attribute mapping to update it, or click Add new mapping at the bottom of the screen
to add new mappings. An individual attribute mapping supports these properties:
Mapping Type
Direct – Writes the value of the SuccessFactors attribute to the AD attribute, with no changes
Constant - Write a static, constant string value to the AD attribute
Expression – Allows you to write a custom value to the AD attribute, based on one or more
SuccessFactors attributes (see the example after these steps). For more info, see this article on expressions.
Source attribute - The user attribute from SuccessFactors
Default value – Optional. If the source attribute has an empty value, the mapping will write this
value instead. The most common configuration is to leave this blank.
Target attribute – The user attribute in Active Directory.
Match objects using this attribute – Whether or not this mapping should be used to uniquely
identify users between SuccessFactors and Active Directory. This value is typically set on the Worker
ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active
Directory.
Matching precedence – Multiple matching attributes can be set. When there are multiple, they are
evaluated in the order defined by this field. As soon as a match is found, no further matching
attributes are evaluated.
Apply this mapping
Always – Apply this mapping on both user creation and update actions
Only during creation - Apply this mapping only on user creation actions
2. To save your mappings, click Save at the top of the Attribute-Mapping section.
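As an illustration of the Expression mapping type, Azure AD provisioning expressions use functions such as Join and
Switch. For example, a hypothetical mapping could derive userPrincipalName from the SuccessFactors username and
an assumed domain: Join("@", [userName], "contoso.com"). Here contoso.com is a placeholder; substitute a verified
domain of your tenant.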
Once your attribute mapping configuration is complete, you can now enable and launch the user provisioning
service.
TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or SuccessFactors data issues, then the provisioning job might fail and go into the quarantine state. To
avoid this, as a best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings
with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are
giving you the desired results, then you can either remove the filter or gradually expand it to include more users.
The objective of this tutorial is to show the steps you need to perform to provision worker data from
SuccessFactors Employee Central into Azure Active Directory, with optional write-back of email address to
SuccessFactors.
NOTE
Use this tutorial if the users you want to provision from SuccessFactors are cloud-only users who don't need an on-premises
AD account. If the users require only an on-premises AD account or both AD and Azure AD accounts, then refer to the
tutorial on configuring SAP SuccessFactors to Active Directory user provisioning.
Overview
The Azure Active Directory user provisioning service integrates with SuccessFactors Employee Central to manage
the identity lifecycle of users.
The SuccessFactors user provisioning workflows supported by the Azure AD user provisioning service enable
automation of the following human resources and identity lifecycle management scenarios:
Hiring new employees - When a new employee is added to SuccessFactors, a user account is
automatically created in Azure Active Directory and optionally Microsoft 365 and other SaaS applications
supported by Azure AD, with write-back of the email address to SuccessFactors.
Employee attribute and profile updates - When an employee record is updated in SuccessFactors (such
as their name, title, or manager), their user account will be automatically updated in Azure Active Directory and
optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Employee terminations - When an employee is terminated in SuccessFactors, their user account is
automatically disabled in Azure Active Directory and optionally Microsoft 365 and other SaaS applications
supported by Azure AD.
Employee rehires - When an employee is rehired in SuccessFactors, their old account can be automatically
reactivated or re-provisioned (depending on your preference) to Azure Active Directory and optionally
Microsoft 365 and other SaaS applications supported by Azure AD.
Who is this user provisioning solution best suited for?
This SuccessFactors to Azure Active Directory user provisioning solution is ideally suited for:
Organizations that desire a pre-built, cloud-based solution for SuccessFactors user provisioning
Organizations that require direct user provisioning from SuccessFactors to Azure Active Directory
Organizations that require users to be provisioned using data obtained from the SuccessFactors Employee
Central (EC)
Organizations using Microsoft 365 for email
Solution Architecture
This section describes the end-to-end user provisioning solution architecture for cloud-only users. There are two
related flows:
Authoritative HR Data Flow – from SuccessFactors to Azure Active Directory: In this flow, worker
events (such as New Hires, Transfers, and Terminations) first occur in the cloud SuccessFactors Employee Central,
and then the event data flows into Azure Active Directory. Depending on the event, it may lead to
create/update/enable/disable operations in Azure AD.
Email Writeback Flow – from Azure Active Directory to SuccessFactors: Once the account
creation is complete in Azure Active Directory, the email attribute value or UPN generated in Azure AD can
be written back to SuccessFactors.
Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.
Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools. Check the box for Allow Admin to Access to OData API through Basic
Authentication.
Scroll down in the same box and select Employee Central API. Add permissions as shown below to read
using the OData API and edit using the OData API. Select the edit option if you plan to use the same account for
the Writeback to SuccessFactors scenario.
Add a Group Name for the new group. The group name should indicate that the group is for API users.
Add members to the group. For example, you could select Username from the People Pool drop-down menu
and then enter the username of the API account that will be used for the integration.
NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.
Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and URL are valid.
Once the credentials are saved successfully, the Mappings section will display the default mapping
Synchronize SuccessFactors Users to Azure Active Directory.
Part 2: Configure attribute mappings
In this section, you will configure how user data flows from SuccessFactors to Azure Active Directory.
1. On the Provisioning tab under Mappings, click Synchronize SuccessFactors Users to Azure Active
Directory.
2. In the Source Object Scope field, you can select which sets of users in SuccessFactors should be in scope
for provisioning to Azure AD, by defining a set of attribute-based filters. The default scope is "all users in
SuccessFactors". Example filters:
Example: Scope to users with personIdExternal between 1000000 and 2000000 (excluding 2000000)
Attribute: personIdExternal
Operator: REGEX Match
Value: (1[0-9][0-9][0-9][0-9][0-9][0-9])
Example: Only employees and not contingent workers
Attribute: EmployeeID
Operator: IS NOT NULL
TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from SuccessFactors. Once
you have verified that the mappings work, you can either remove the filter or gradually expand it to include
more users.
Caution
The default behavior of the provisioning engine is to disable/delete users that go out of scope. This may not
be desirable in your SuccessFactors to Azure AD integration. To override this default behavior, refer to the
article Skip deletion of user accounts that go out of scope.
3. In the Target Object Actions field, you can globally filter what actions are performed on Azure Active
Directory. Create and Update are most common.
4. In the Attribute mappings section, you can define how individual SuccessFactors attributes map to Azure
Active Directory attributes.
NOTE
For the complete list of SuccessFactors attributes supported by the application, please refer to the SuccessFactors
Attribute Reference.
1. Click on an existing attribute mapping to update it, or click Add new mapping at the bottom of the screen
to add new mappings. An individual attribute mapping supports these properties:
Mapping Type
Direct – Writes the value of the SuccessFactors attribute to the AD attribute, with no changes
Constant - Write a static, constant string value to the AD attribute
Expression – Allows you to write a custom value to the AD attribute, based on one or more
SuccessFactors attributes. For more info, see this article on expressions.
Source attribute - The user attribute from SuccessFactors
Default value – Optional. If the source attribute has an empty value, the mapping will write this
value instead. The most common configuration is to leave this blank.
Target attribute – The user attribute in Active Directory.
Match objects using this attribute – Whether or not this mapping should be used to uniquely
identify users between SuccessFactors and Active Directory. This value is typically set on the Worker
ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active
Directory.
Matching precedence – Multiple matching attributes can be set. When there are multiple, they are
evaluated in the order defined by this field. As soon as a match is found, no further matching
attributes are evaluated.
Apply this mapping
Always – Apply this mapping on both user creation and update actions
Only during creation - Apply this mapping only on user creation actions
2. To save your mappings, click Save at the top of the Attribute-Mapping section.
Once your attribute mapping configuration is complete, you can now enable and launch the user provisioning
service.
TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or SuccessFactors data issues, then the provisioning job might fail and go into the quarantine state. To avoid
this, as a best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings with a
few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you
the desired results, then you can either remove the filter or gradually expand it to include more users.
The objective of this tutorial is to show the steps to write back attributes from Azure AD to SAP SuccessFactors
Employee Central.
Overview
You can configure the SAP SuccessFactors Writeback app to write specific attributes from Azure Active Directory to
SAP SuccessFactors Employee Central. The SuccessFactors writeback provisioning app supports assigning values
to the following Employee Central attributes:
Work Email
Username
Business phone number (including country code, area code, number, and extension)
Business phone number primary flag
Cell phone number (including country code, area code, number)
Cell phone primary flag
User custom01-custom15 attributes
loginMethod attribute
NOTE
This app does not have any dependency on the SuccessFactors inbound user provisioning integration apps. You can
configure it independently of the SuccessFactors to on-premises AD provisioning app or the SuccessFactors to Azure AD
provisioning app.
4. Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.
5. Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools. Check the box for Allow Admin to Access to OData API through Basic
Authentication.
6. Scroll down in the same box and select Employee Central API. Add permissions as shown below to read
using the OData API and edit using the OData API. Select the edit option if you plan to use the same account
for the write-back to SuccessFactors scenario.
3. Add a Group Name for the new group. The group name should indicate that the group is for API users.
4. Add members to the group. For example, you could select Username from the People Pool drop-down
menu and then enter the username of the API account that will be used for the integration.
NOTE
Please involve your SuccessFactors Admin to complete the steps in this section.
Identify Email and Phone Number picklist names
In SAP SuccessFactors, a picklist is a configurable set of options from which a user can make a selection. The
different types of email and phone number (e.g. business, personal, other) are represented using a picklist. In this
step, we will identify the picklists configured in your SuccessFactors tenant to store email and phone number
values.
1. In SuccessFactors Admin Center, search for Manage business configuration.
2. Under HRIS Elements, select emailInfo and click on the Details for the email-type field.
3. On the email-type details page, note down the name of the picklist associated with this field. By default, it
is ecEmailType. However, it may be different in your tenant.
4. Under HRIS Elements, select phoneInfo and click on the Details for the phone-type field.
5. On the phone-type details page, note down the name of the picklist associated with this field. By default, it
is ecPhoneType. However, it may be different in your tenant.
5. Note down the Option ID associated with the Business email. This is the code that we will use with
emailType in the attribute-mapping table.
NOTE
Drop the comma character when you copy over the value. For example, if the Option ID value is 8,448, then set the
emailType in Azure AD to the constant number 8448 (without the comma character).
4. On the phone type picklist page, review the different phone types listed under Picklist Values .
5. Note down the Option ID associated with the Business phone. This is the code that we will use with
businessPhoneType in the attribute-mapping table.
6. Note down the Option ID associated with the Cell phone. This is the code that we will use with
cellPhoneType in the attribute-mapping table.
NOTE
Drop the comma character when you copy over the value. For example, if the Option ID value is 10,606, then set the
cellPhoneType in Azure AD to the constant number 10606 (without the comma character).
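If you prefer to cross-check the picklist Option IDs outside the UI, you may be able to read them through the OData
API, assuming the Picklist entity is exposed in your instance. The host, credentials, and picklist ID below are
placeholders.

# Sketch only: host, credentials and picklist ID are placeholders
curl -u 'sfapiuser@companyID:<password>' \
  "https://api4.successfactors.com/odata/v2/Picklist('ecPhoneType')?\$expand=picklistOptions&\$format=json"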
Configuring SuccessFactors Writeback App
This section provides steps for
Add the provisioning connector app and configure connectivity to SuccessFactors
Configure attribute mappings
Enable and launch user provisioning
Part 1: Add the provisioning connector app and configure connectivity to SuccessFactors
To configure SuccessFactors Writeback:
1. Go to https://portal.azure.com
2. In the left navigation bar, select Azure Active Directory
3. Select Enterprise Applications, then All Applications.
4. Select Add an application, and select the All category.
5. Search for SuccessFactors Writeback, and add that app from the gallery.
6. After the app is added and the app details screen is shown, select Provisioning
7. Change the Provisioning Mode to Automatic
8. Complete the Admin Credentials section as follows:
Admin Username – Enter the username of the SuccessFactors API user account, with the company
ID appended. It has the format: username@companyID
Admin password – Enter the password of the SuccessFactors API user account.
Tenant URL – Enter the name of the SuccessFactors OData API services endpoint. Only enter the
hostname of the server, without http or https. This value should look like: api4.successfactors.com.
Notification Email – Enter your email address, and check the "send email if failure occurs"
checkbox.
NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.
Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and URL are valid.
Once the credentials are saved successfully, the Mappings section will display the default mapping.
Refresh the page, if the attribute mappings are not visible.
Part 2: Configure attribute mappings
In this section, you will configure how user data flows from Azure AD to SuccessFactors.
1. On the Provisioning tab under Mappings, click Provision Azure Active Directory Users.
2. In the Source Object Scope field, you can select which sets of users in Azure AD should be considered for
write-back, by defining a set of attribute-based filters. The default scope is "all users in Azure AD".
TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from Azure AD. Once you
have verified that the mappings work, you can either remove the filter or gradually expand it to include more
users.
3. The Target Object Actions field only supports the Update operation.
4. In the mapping table under Attribute mappings section, you can map the following Azure Active
Directory attributes to SuccessFactors. The table below provides guidance on how to map the write-back
attributes.
# | Azure AD attribute | SuccessFactors attribute | Remarks
NOTE
If the Edit attribute list for SuccessFactors option does not show in the Azure portal, use the URL
https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true to access the page.
9. The API expression column in this view displays the JSON Path expressions used by the connector.
10. Update the JSON Path expressions for business phone and cell phone to use the ID value
(businessPhoneType and cellPhoneType) corresponding to your environment.
11. Click Save to save the mappings.
TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or data issues, then the provisioning job might fail and go into the quarantine state. To avoid this, as a
best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings with a few test
users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the
desired results, then you can either remove the filter or gradually expand it to include more users.
NOTE
The SuccessFactors Writeback provisioning app does not support "group assignment". Only "user assignment" is
supported.
3. Click Save .
4. This operation will start the initial sync, which can take a variable number of hours depending on how
many users are in the Azure AD tenant and the scope defined for the operation. You can check the progress
bar to track the progress of the sync cycle.
5. At any time, check the Provisioning logs tab in the Azure portal to see what actions the provisioning
service has performed. The provisioning logs list all individual sync events performed by the provisioning
service.
6. Once the initial sync is completed, it will write an audit summary report in the Provisioning tab, as shown
below.
Next steps
Deep dive into Azure AD and SAP SuccessFactors integration reference
Learn how to review logs and get reports on provisioning activity
Learn how to configure single sign-on between SuccessFactors and Azure Active Directory
Learn how to integrate other SaaS applications with Azure Active Directory
Learn how to export and import your provisioning configurations
Tutorial: Configure SAP Cloud Platform Identity
Authentication for automatic user provisioning
The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Platform Identity
Authentication and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and
deprovision users and/or groups to SAP Cloud Platform Identity Authentication.
NOTE
This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this
service does, how it works, and frequently asked questions, see Automate user provisioning and deprovisioning to SaaS
applications with Azure Active Directory.
This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview
features, see Supplemental Terms of Use for Microsoft Azure Previews.
Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
An Azure AD tenant
A SAP Cloud Platform Identity Authentication tenant
A user account in SAP Cloud Platform Identity Authentication with Admin permissions.
2. Press the +Add button on the left-hand panel to add a new administrator to the list. Choose Add
System and enter the name of the system.
NOTE
The administrator user in SAP Cloud Platform Identity Authentication must be of type System. Creating a normal
administrator user can lead to unauthorized errors while provisioning.
3. Under Configure Authorizations, switch on the toggle button against Manage Users and Manage Groups.
4. You will receive an email to activate your account and set a password for the SAP Cloud Platform Identity
Authentication Service.
5. Copy the User ID and Password . These values will be entered in the Admin Username and Admin
Password fields respectively in the Provisioning tab of your SAP Cloud Platform Identity Authentication
application in the Azure portal.
Add SAP Cloud Platform Identity Authentication from the gallery
Before configuring SAP Cloud Platform Identity Authentication for automatic user provisioning with Azure AD, you
need to add SAP Cloud Platform Identity Authentication from the Azure AD application gallery to your list of
managed SaaS applications.
To add SAP Cloud Platform Identity Authentication from the Azure AD application gallery, perform
the following steps:
1. In the Azure portal, in the left navigation panel, select Azure Active Directory.
3. To add a new application, select the New application button at the top of the pane.
4. In the search box, enter SAP Cloud Platform Identity Authentication, select SAP Cloud Platform
Identity Authentication in the results panel, and then click the Add button to add the application.
Configuring automatic user provisioning to SAP Cloud Platform Identity
Authentication
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and
disable users and/or groups in SAP Cloud Platform Identity Authentication based on user and/or group
assignments in Azure AD.
TIP
You may also choose to enable SAML-based single sign-on for SAP Cloud Platform Identity Authentication, following the
instructions provided in the SAP Cloud Platform Identity Authentication single sign-on tutorial. Single sign-on can be
configured independently of automatic user provisioning, though these two features complement each other.
To configure automatic user provisioning for SAP Cloud Platform Identity Authentication in Azure AD:
1. Sign in to the Azure portal. Select Enterprise Applications , then select All applications .
6. In the Notification Email field, enter the email address of a person or group who should receive the
provisioning error notifications, and check the checkbox Send an email notification when a failure
occurs.
7. Click Save .
8. Under the Mappings section, select Synchronize Azure Active Directory Users to SAP Cloud
Platform Identity Authentication.
9. Review the user attributes that are synchronized from Azure AD to SAP Cloud Platform Identity
Authentication in the Attribute Mapping section. The attributes selected as Matching properties are used
to match the user accounts in SAP Cloud Platform Identity Authentication for update operations. Select the
Save button to commit any changes.
10. To configure scoping filters, refer to the following instructions provided in the Scoping filter tutorial.
11. To enable the Azure AD provisioning service for SAP Cloud Platform Identity Authentication, change the
Provisioning Status to On in the Settings section.
12. Define the users and/or groups that you would like to provision to SAP Cloud Platform Identity
Authentication by choosing the desired values in Scope in the Settings section.
13. When you are ready to provision, click Save .
This operation starts the initial synchronization of all users and/or groups defined in Scope in the Settings
section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40
minutes as long as the Azure AD provisioning service is running. You can use the Synchronization Details
section to monitor progress and follow links to the provisioning activity report, which describes all actions
performed by the Azure AD provisioning service on SAP Cloud Platform Identity Authentication.
For more information on how to read the Azure AD provisioning logs, see Reporting on automatic user account
provisioning.
Connector limitations
SAP Cloud Platform Identity Authentication's SCIM endpoint requires certain attributes to be of a specific format.
You can learn more about these attributes and their specific format here.
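For troubleshooting attribute-format issues, it can help to inspect what the SCIM endpoint returns for an existing
user, for example with curl. The endpoint path shown here is an assumption to verify against your tenant's
documentation, and the tenant ID and System user credentials are placeholders.

# Sketch only: tenant ID and System user credentials are placeholders
curl -u '<USER_ID>:<password>' \
  'https://<IAS-tenant-id>.accounts.ondemand.com/service/scim/Users?count=1'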
Additional resources
Managing user account provisioning for Enterprise Apps
What is application access and single sign-on with Azure Active Directory?
Next steps
Learn how to review logs and get reports on provisioning activity
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Cloud Platform Identity
Authentication
In this tutorial, you'll learn how to integrate SAP Cloud Platform Identity Authentication with Azure Active Directory
(Azure AD). When you integrate SAP Cloud Platform Identity Authentication with Azure AD, you can:
Control in Azure AD who has access to SAP Cloud Platform Identity Authentication.
Enable your users to be automatically signed-in to SAP Cloud Platform Identity Authentication with their Azure
AD accounts.
Manage your accounts in one central location - the Azure portal.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Cloud Platform Identity Authentication single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Cloud Platform Identity Authentication supports SP-initiated and IDP-initiated SSO.
Before you dive into the technical details, it's vital to understand the concepts you're going to look at. The SAP
Cloud Platform Identity Authentication and Active Directory Federation Services enable you to implement SSO
across applications or services that are protected by Azure AD (as an IdP) with SAP applications and services that
are protected by SAP Cloud Platform Identity Authentication.
Currently, SAP Cloud Platform Identity Authentication acts as a Proxy Identity Provider to SAP applications. Azure
Active Directory in turn acts as the leading Identity Provider in this setup.
The following diagram illustrates this relationship:
With this setup, your SAP Cloud Platform Identity Authentication tenant is configured as a trusted application in
Azure Active Directory.
All SAP applications and services that you want to protect this way are subsequently configured in the SAP Cloud
Platform Identity Authentication management console.
Therefore, the authorization for granting access to SAP applications and services needs to take place in SAP Cloud
Platform Identity Authentication (as opposed to Azure Active Directory).
By configuring SAP Cloud Platform Identity Authentication as an application through the Azure Active Directory
Marketplace, you don't need to configure individual claims or SAML assertions.
NOTE
Currently only Web SSO has been tested by both parties. The flows that are necessary for App-to-API or API-to-API
communication should work but have not been tested yet. They will be tested during subsequent activities.
Configure and test Azure AD SSO for SAP Cloud Platform Identity
Authentication
Configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication using a test user called
B.Simon . For SSO to work, you need to establish a link relationship between an Azure AD user and the related
user in SAP Cloud Platform Identity Authentication.
To configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication, perform the following steps:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Cloud Platform Identity Authentication SSO - to configure the single sign-on settings on
application side.
a. Create SAP Cloud Platform Identity Authentication test user - to have a counterpart of B.Simon
in SAP Cloud Platform Identity Authentication that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.
4. On the Basic SAML Configuration section, if you wish to configure the application in IDP-initiated mode,
perform the following steps:
a. In the Identifier text box, type a URL using the following pattern: <IAS-tenant-id>.accounts.ondemand.com
b. In the Reply URL text box, type a URL using the following pattern:
https://<IAS-tenant-id>.accounts.ondemand.com/saml2/idp/acs/<IAS-tenant-id>.accounts.ondemand.com
NOTE
These values are not real. Update these values with the actual identifier and Reply URL. Contact the SAP Cloud
Platform Identity Authentication Client support team to get these values. If you don't understand Identifier value,
read the SAP Cloud Platform Identity Authentication documentation about Tenant SAML 2.0 configuration.
5. Click Set additional URLs and perform the following step if you wish to configure the application in
SP-initiated mode:
In the Sign-on URL text box, type a URL using the following pattern: {YOUR BUSINESS APPLICATION URL}
NOTE
This value is not real. Update this value with the actual sign-on URL. Please use your specific business application
Sign-on URL. Contact the SAP Cloud Platform Identity Authentication Client support team if you have any doubt.
6. SAP Cloud Platform Identity Authentication application expects the SAML assertions in a specific format,
which requires you to add custom attribute mappings to your SAML token attributes configuration. The
following screenshot shows the list of default attributes.
7. In addition to the above, the SAP Cloud Platform Identity Authentication application expects a few more
attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated,
but you can review them as per your requirements.
firstName | user.givenname
8. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Metadata XML from the given options as per your requirement, and save it on
your computer.
9. On the Set up SAP Cloud Platform Identity Authentication section, copy the appropriate URL(s) as per
your requirement.
2. After adding the extension to the browser, clicking on Set up SAP Cloud Platform Identity Authentication
directs you to the SAP Cloud Platform Identity Authentication application. From there, provide the admin
credentials to sign in to SAP Cloud Platform Identity Authentication. The browser extension will automatically
configure the application for you and automate steps 3-7.
3. If you want to set up SAP Cloud Platform Identity Authentication manually, open a different web browser
window and go to the SAP Cloud Platform Identity Authentication administration console. The URL has the
following pattern: https://<tenant-id>.accounts.ondemand.com/admin. Then read the documentation about
SAP Cloud Platform Identity Authentication at Integration with Microsoft Azure AD.
4. In the Azure portal, select the Save button.
5. Continue with the following only if you want to add and enable SSO for another SAP application. Repeat the
steps under the section Adding SAP Cloud Platform Identity Authentication from the gallery.
6. In the Azure portal, on the SAP Cloud Platform Identity Authentication application integration page,
select Linked Sign-on .
NOTE
The new application leverages the single sign-on configuration of the previous SAP application. Make sure you use the same
Corporate Identity Providers in the SAP Cloud Platform Identity Authentication administration console.
Test SSO
In this section, you test your Azure AD single sign-on configuration with following options.
SP initiated:
Click on Test this application in the Azure portal. This will redirect to the SAP Cloud Platform Identity
Authentication Sign-on URL, where you can initiate the login flow.
Go to SAP Cloud Platform Identity Authentication Sign-on URL directly and initiate the login flow from there.
IDP initiated:
Click on Test this application in Azure portal and you should be automatically signed in to the SAP Cloud
Platform Identity Authentication for which you set up the SSO
You can also use Microsoft My Apps to test the application in any mode. When you click the SAP Cloud Platform
Identity Authentication tile in the My Apps, if configured in SP mode you would be redirected to the application
sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to
the SAP Cloud Platform Identity Authentication for which you set up the SSO. For more information about the My
Apps, see Introduction to the My Apps.
Next steps
Once you configure SAP Cloud Platform Identity Authentication, you can enforce session controls, which
protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls
extend from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security.
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SuccessFactors
In this tutorial, you'll learn how to integrate SuccessFactors with Azure Active Directory (Azure AD). When you
integrate SuccessFactors with Azure AD, you can:
Control in Azure AD who has access to SuccessFactors.
Enable your users to be automatically signed-in to SuccessFactors with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SuccessFactors single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SuccessFactors supports SP initiated SSO.
Once you configure SuccessFactors, you can enforce session controls, which protect against exfiltration and
infiltration of your organization’s sensitive data in real-time. Session controls extend from Conditional Access.
Learn how to enforce session control with Microsoft Cloud App Security
NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. Contact the
SuccessFactors Client support team to get these values.
5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find
Certificate (Base64) and select Download to download the certificate and save it on your computer.
6. On the Set up SuccessFactors section, copy the appropriate URL(s) based on your requirement.
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
NOTE
This value is used as the on/off switch. If any value is saved, the SAML SSO is ON. If a blank value is saved, the SAML
SSO is OFF.
4. Refer to the following screenshot and perform the following actions:
NOTE
The certificate content must include the begin certificate and end certificate tags (the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines).
NOTE
If you try to enable this, the system checks whether it would create a duplicate SAML login name. For example, if
the customer has the usernames User1 and user1, taking away case sensitivity makes these duplicates. The system
gives you an error message and does not enable the feature. The customer needs to change one of the usernames so
it’s spelled differently.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SuccessFactors with Azure AD
What is session control in Microsoft Cloud App Security?
How to protect SuccessFactors with advanced visibility and controls
Tutorial: Integrate SAP Analytics Cloud with Azure
Active Directory
12/22/2020 • 6 minutes to read
In this tutorial, you'll learn how to integrate SAP Analytics Cloud with Azure Active Directory (Azure AD). When you
integrate SAP Analytics Cloud with Azure AD, you can:
Control in Azure AD who has access to SAP Analytics Cloud.
Enable your users to be automatically signed-in to SAP Analytics Cloud with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Analytics Cloud single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Analytics Cloud supports SP initiated SSO
4. On the Basic SAML Configuration section, enter the values for the following fields:
a. In the Sign on URL text box, type a URL using the following pattern:
https://<sub-domain>.sapanalytics.cloud/
https://<sub-domain>.sapbusinessobjects.cloud/
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
<sub-domain>.sapbusinessobjects.cloud
<sub-domain>.sapanalytics.cloud
NOTE
The values in these URLs are for demonstration only. Update the values with the actual sign-on URL and identifier
URL. To get the sign-on URL, contact the SAP Analytics Cloud Client support team. You can get the identifier URL by
downloading the SAP Analytics Cloud metadata from the admin console. This is explained later in the tutorial.
5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
6. On the Set up SAP Analytics Cloud section, copy the appropriate URL(s) based on your requirement.
6. To upload the metadata file that you downloaded from the Azure portal (Step 2), under Upload Identity
Provider metadata , select Upload .
7. In the User Attribute list, select the user attribute (Step 3) that you want to use for your implementation.
This user attribute maps to the identity provider. To enter a custom attribute on the user's page, use the
Custom SAML Mapping option. Or, you can select either Email or USER ID as the user attribute. In our
example, we selected Email because we mapped the user identifier claim with the userprincipalname
attribute in the User Attributes & Claims section in the Azure portal. This provides a unique user email,
which is sent to the SAP Analytics Cloud application in every successful SAML response.
8. To verify the account with the identity provider (Step 4), in the Login Credential (Email) box, enter the
user's email address. Then, select Verify Account . The system adds sign-in credentials to the user account.
9. Select the Save icon.
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
Create SAP Analytics Cloud test user
Azure AD users must be provisioned in SAP Analytics Cloud before they can sign in to SAP Analytics Cloud. In SAP
Analytics Cloud, provisioning is a manual task.
To provision a user account:
1. Sign in to your SAP Analytics Cloud company site as an administrator.
2. Select Menu > Security > Users .
Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Fiori
11/2/2020 • 8 minutes to read
In this tutorial, you'll learn how to integrate SAP Fiori with Azure Active Directory (Azure AD). When you integrate
SAP Fiori with Azure AD, you can:
Control in Azure AD who has access to SAP Fiori.
Enable your users to be automatically signed-in to SAP Fiori with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Fiori single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Fiori supports SP initiated SSO
NOTE
For SAP Fiori initiated iFrame authentication, we recommend using the IsPassive parameter in the SAML AuthnRequest for
silent authentication. For more details about the IsPassive parameter, refer to the Azure AD SAML single sign-on information.
login/create_sso2_ticket = 2
login/accept_sso2_ticket = 1
login/ticketcache_entries_max = 1000
login/ticketcache_off = 0
login/ticket_only_by_https = 0
icf/set_HTTPonly_flag_on_cookies = 3
icf/user_recheck = 0
http/security_session_timeout = 1800
http/security_context_cache_size = 2500
rdisp/plugin_auto_logout = 1800
rdisp/autothtime = 60
NOTE
Adjust the parameters based on your organization’s requirements. The preceding parameters are given only as
an example.
b. If necessary, adjust parameters in the instance (default) profile of the SAP system and restart the SAP
system.
c. Double-click the relevant client to enable an HTTP security session.
d. Activate the following SICF services:
/sap/public/bc/sec/saml2
/sap/public/bc/sec/cdc_ext_service
/sap/bc/webdynpro/sap/saml2
/sap/bc/webdynpro/sap/sec_diag_tool (This is only to enable / disable trace)
4. Go to transaction code SAML2 in Business Client for SAP system [T01/122]. The configuration UI opens in
a new browser window. In this example, we use Business Client for SAP system 122.
NOTE
By default, the provider name is in the format <sid><client>. Azure AD expects the name in the format
<protocol>://<name>. We recommend that you maintain the provider name as https://<sid><client> so you can
configure multiple SAP Fiori ABAP engines in Azure AD.
7. Select Local Provider tab > Metadata .
8. In the SAML 2.0 Metadata dialog box, download the generated metadata XML file and save it on your
computer.
9. In the Azure portal, on the SAP Fiori application integration page, find the Manage section and select
single sign-on .
10. On the Select a single sign-on method page, select SAML .
11. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.
12. On the Basic SAML Configuration section, if you have a Service Provider metadata file , perform the
following steps:
a. Click Upload metadata file .
b. Click on folder logo to select the metadata file and click Upload .
c. When the metadata file is successfully uploaded, the Identifier and Reply URL values are automatically
populated in the Basic SAML Configuration pane. In the Sign on URL box, enter a URL that has the
following pattern: https://<your company instance of SAP Fiori> .
NOTE
A few customers report errors related to incorrectly configured Reply URL values. If you see this error, you can use
the following PowerShell script to set the correct Reply URL for your instance. You can set the ServicePrincipal
object ID yourself before running the script, or you can pass it in as a parameter.
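As a minimal sketch, assuming the AzureAD PowerShell module and placeholder values for the object ID and Reply URL, such a script could look like the following:

# Minimal sketch, assuming the AzureAD PowerShell module (Install-Module AzureAD).
# All values below are placeholders you must supply.
Connect-AzureAD

# Object ID of the SAP Fiori enterprise application's service principal;
# set it here or pass it in as a script parameter.
$spObjectId = "<service-principal-object-id>"

# Overwrite the Reply URL (Assertion Consumer Service URL) for your instance.
Set-AzureADServicePrincipal -ObjectId $spObjectId `
    -ReplyUrls @("https://<your company instance of SAP Fiori>/<reply-path>")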
13. The SAP Fiori application expects the SAML assertions to be in a specific format. Configure the following
claims for this application. To manage these attribute values, in the Set up Single Sign-On with SAML
pane, select Edit .
14. In the User Attributes & Claims pane, configure the SAML token attributes as shown in the preceding
image. Then, complete the following steps:
a. Select Edit to open the Manage user claims pane.
b. In the Transformation list, select ExtractMailPrefix() .
c. In the Parameter 1 list, select user.userprincipalname .
d. Select Save .
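For illustration only, ExtractMailPrefix() returns the portion of the source attribute before the @ sign; the equivalent string operation on a sample UPN would be:

# Illustration only: what ExtractMailPrefix() yields for a sample UPN
"[email protected]".Split("@")[0]   # returns: britta.simon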
15. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
16. On the Set up SAP Fiori section, copy the appropriate URL(s) based on your requirement.
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
3. Select Add , and then select Upload Metadata File from the context menu.
4. Upload the metadata file that you downloaded in the Azure portal. Select Next .
5. On the next page, in the Alias box, enter the alias name. For example, aadsts . Select Next .
6. Make sure that the value in the Digest Algorithm box is SHA-256 . Select Next .
7. Under Single Sign-On Endpoints , select HTTP POST , and then select Next .
8. Under Single Logout Endpoints , select HTTP Redirect , and then select Next .
9. Under Artifact Endpoints , select Next to continue.
13. In the Supported NameID Formats dialog box, select Unspecified . Select OK .
The values for User ID Source and User ID Mapping Mode determine the link between the SAP user and
the Azure AD claim.
Scenario 1 : SAP user to Azure AD user mapping
a. In SAP, under Details of NameID Format "Unspecified" , note the details:
b. In the Azure portal, under User Attributes & Claims , note the required claims from Azure AD.
Scenario 2 : Select the SAP user ID based on the configured email address in SU01. In this case, the email ID
should be configured in SU01 for each user who requires SSO.
a. In SAP, under Details of NameID Format "Unspecified" , note the details:
b. In the Azure portal, under User Attributes & Claims , note the required claims from Azure AD.
14. Select Save , and then select Enable to enable the identity provider.
15. Select OK when prompted.
Test SSO
1. After the identity provider Azure AD is activated in SAP Fiori, try to access one of the following URLs to test
single sign-on (you shouldn't be prompted for a username and password):
https://<sapurl>/sap/bc/bsp/sap/it00/default.htm
NOTE
Replace sapurl with the actual SAP host name.
2. The test URL should take you to the following test application page in SAP. If the page opens, Azure AD single
sign-on is successfully set up.
3. If you are prompted for a username and password, enable trace to help diagnose the issue. Use the
following URL for the trace: https://<sapurl>/sap/bc/webdynpro/sap/sec_diag_tool?sap-client=122&sap-
language=EN#.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SAP Fiori with Azure AD
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Qualtrics
11/2/2020 • 5 minutes to read
In this tutorial, you'll learn how to integrate SAP Qualtrics with Azure Active Directory (Azure AD). When you
integrate SAP Qualtrics with Azure AD, you can:
Control in Azure AD who has access to SAP Qualtrics.
Enable your users to be automatically signed in to SAP Qualtrics with their Azure AD accounts.
Manage your accounts in one central location: the Azure portal.
To learn more about software as a service (SaaS) app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory.
Prerequisites
To get started, you need:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
A SAP Qualtrics subscription enabled for single sign-on (SSO).
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Qualtrics supports SP and IDP initiated SSO.
SAP Qualtrics supports Just In Time user provisioning.
After you configure SAP Qualtrics, you can enforce session control, which protects against exfiltration and
infiltration of your organization’s sensitive data in real time. Session control extends from conditional access. For
more information, see Learn how to enforce session control with Microsoft Cloud App Security.
4. On the Set up single sign-on with SAML page, if you want to configure the application in IDP initiated
mode, enter the values for the following fields:
a. In the Identifier text box, type a URL that uses the following pattern:
https://<DATACENTER>.qualtrics.com
b. In the Reply URL text box, type a URL that uses the following pattern:
https://<DATACENTER>.qualtrics.com/login/v1/sso/saml2/default-sp
c. In the Relay State text box, type a URL that uses the following pattern:
https://<brandID>.<DATACENTER>.qualtrics.com
5. Select Set additional URLs , and perform the following step if you want to configure the application in SP
initiated mode:
In the Sign-on URL textbox, type a URL that uses the following pattern:
https://<brandID>.<DATACENTER>.qualtrics.com
NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier, Reply URL, and Relay State. To
get these values, contact the Qualtrics Client support team. You can also refer to the patterns shown in the Basic
SAML Configuration section in the Azure portal.
6. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, select the
copy icon to copy App Federation Metadata Url and save it on your computer.
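If you want a local copy of the metadata that the URL points to, a minimal sketch that downloads it follows; the URL shown is only the usual placeholder pattern, not a real value:

# Minimal sketch: save the federation metadata referenced by the copied
# App Federation Metadata Url (placeholder values) to a local file.
$metadataUrl = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
Invoke-WebRequest -Uri $metadataUrl -OutFile .\federationmetadata.xml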
4. Select Add user . Then in the Add Assignment dialog box, select Users and groups .
5. In the Users and groups dialog box, select B.Simon from the list of users. Then choose Select at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog box, select the
appropriate role for the user from the list. Then choose Select at the bottom of the screen.
7. In the Add Assignment dialog box, select Assign .
Test SSO
In this section, you test your Azure AD single sign-on configuration by using Access Panel.
When you select the SAP Qualtrics tile in Access Panel, you're automatically signed in to the SAP Qualtrics for which
you set up SSO. For more information, see Sign in and start apps from the My Apps portal.
Additional resources
Tutorials for integrating SaaS applications with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SAP Qualtrics with Azure AD
What is session control in Microsoft Cloud App Security?
Protect SAP Qualtrics with advanced visibility and controls
Tutorial: Azure Active Directory integration with Ariba
11/2/2020 • 5 minutes to read
In this tutorial, you learn how to integrate Ariba with Azure Active Directory (Azure AD). Integrating Ariba with
Azure AD provides you with the following benefits:
You can control in Azure AD who has access to Ariba.
You can enable your users to be automatically signed-in to Ariba (Single Sign-On) with their Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.
Prerequisites
To configure Azure AD integration with Ariba, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a one-month trial here
Ariba single sign-on enabled subscription
Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
Ariba supports SP initiated SSO
Once you configure Ariba, you can enforce session control, which protects against exfiltration and infiltration of
your organization’s sensitive data in real time. Session control extends from Conditional Access. Learn how to
enforce session control with Microsoft Cloud App Security
a. In the Sign on URL text box, type a URL using the following pattern:
https://<subdomain>.sourcing.ariba.com
https://<subdomain>.supplier.ariba.com
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
http://<subdomain>.procurement-2.ariba.com
https://<subdomain>.ariba.com/CUSTOM_URL
https://<subdomain>.procurement-eu.ariba.com/CUSTOM_URL
https://<subdomain>.procurement-eu.ariba.com
https://<subdomain>.procurement-2.ariba.com
https://<subdomain>.procurement-2.ariba.com/CUSTOM_URL
NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. We suggest
using a unique string value for the Identifier. Contact the Ariba Client support team at 1-866-218-2155 to get
these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure
portal.
5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Certificate (Base64) from the given options as per your requirement and
save it on your computer.
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the Ariba tile in the Access Panel, you should be automatically signed in to the Ariba for which you
set up SSO. For more information about the Access Panel, see Introduction to the Access Panel.
Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with Concur Travel and Expense
12/22/2020 • 7 minutes to read
In this tutorial, you'll learn how to integrate Concur Travel and Expense with Azure Active Directory (Azure AD).
When you integrate Concur Travel and Expense with Azure AD, you can:
Control in Azure AD who has access to Concur Travel and Expense.
Enable your users to be automatically signed-in to Concur Travel and Expense with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
Concur Travel and Expense subscription.
A "Company Administrator" role under your Concur user account. You can test if you have the right access by
going to Concur SSO Self-Service Tool. If you do not have the access, please contact Concur support or
implementation project manager.
Scenario description
In this tutorial, you configure and test Azure AD SSO.
Concur Travel and Expense supports IDP and SP initiated SSO
Concur Travel and Expense supports testing SSO in both production and implementation environments
NOTE
The Identifier of this application is a fixed string value for each of the three regions (US, EMEA, and China), so only
one instance can be configured per region in one tenant.
Configure and test Azure AD SSO for Concur Travel and Expense
Configure and test Azure AD SSO with Concur Travel and Expense using a test user called B.Simon . For SSO to
work, you need to establish a link relationship between an Azure AD user and the related user in Concur Travel and
Expense.
To configure and test Azure AD SSO with Concur Travel and Expense, perform the following steps:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure Concur Travel and Expense SSO - to configure the single sign-on settings on application side.
a. Create Concur Travel and Expense test user - to have a counterpart of B.Simon in Concur Travel and
Expense that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.
4. In the Basic SAML Configuration section, the application is pre-configured in IDP initiated mode, and the
necessary URLs are already pre-populated by Azure. Save the configuration by clicking
the Save button.
NOTE
Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) are region specific. Please select based on the
datacenter of your Concur entity. If you do not know the datacenter of your Concur entity, please contact Concur
support.
5. On the Set up Single Sign-On with SAML page, click the edit/pen icon for User Attribute to edit the
settings. The Unique User Identifier needs to match the Concur user login_id . Usually, you should change
user.userprincipalname to user.mail .
6. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the metadata and save it on your
computer.
2. After adding the extension to the browser, clicking Set up Concur Travel and Expense will direct you to the
Concur Travel and Expense application. From there, provide the admin credentials to sign in to Concur Travel
and Expense. The browser extension will automatically configure the application for you and automate steps
3-7.
3. If you want to set up Concur Travel and Expense manually, in a different web browser window, you need to
upload the downloaded Federation Metadata XML to the Concur SSO Self-Service Tool and sign in to your
Concur Travel and Expense company site as an administrator.
4. Click Add .
5. Enter a custom name for your IdP, for example "Azure AD (US)".
6. Click Upload XML File and attach Federation Metadata XML you downloaded previously.
7. Click Add Metadata to save the change.
NOTE
B.Simon's Concur login ID needs to match B.Simon's unique identifier in Azure AD. For example, if B.Simon's Azure AD unique
identifier is [email protected] , B.Simon's Concur login ID needs to be [email protected] as well.
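To double-check the value Azure AD holds for a user, you can look it up with the AzureAD PowerShell module; a quick sketch, where the UPN is a placeholder:

# Quick check of the identifiers Azure AD holds for the test user (placeholder UPN)
Get-AzureADUser -ObjectId "B.Simon@<your-tenant>.onmicrosoft.com" |
    Select-Object UserPrincipalName, Mail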
NOTE
The Self-Service option to configure mobile SSO is not available, so work with the Concur support team to enable it.
Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
SP initiated:
Click on Test this application in Azure portal. This will redirect to Concur Travel and Expense Sign on URL
where you can initiate the login flow.
Go to Concur Travel and Expense Sign-on URL directly and initiate the login flow from there.
IDP initiated:
Click on Test this application in Azure portal and you should be automatically signed in to the Concur Travel
and Expense for which you set up the SSO.
You can also use Microsoft My Apps to test the application in any mode. When you click the Concur Travel and
Expense tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to
initiate the login flow, and if configured in IDP mode, you are automatically signed in to the Concur Travel
and Expense for which you set up the SSO. For more information about My Apps, see Introduction to the My
Apps.
Next steps
Once you configure Concur Travel and Expense, you can enforce session control, which protects against exfiltration
and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access.
Learn how to enforce session control with Microsoft Cloud App Security.
Tutorial: Azure Active Directory integration with SAP
Cloud Platform
11/2/2020 • 8 minutes to read
In this tutorial, you learn how to integrate SAP Cloud Platform with Azure Active Directory (Azure AD). Integrating
SAP Cloud Platform with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Cloud Platform.
You can enable your users to be automatically signed-in to SAP Cloud Platform (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.
Prerequisites
To configure Azure AD integration with SAP Cloud Platform, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a one-month trial here
SAP Cloud Platform single sign-on enabled subscription
After completing this tutorial, the Azure AD users you have assigned to SAP Cloud Platform will be able to sign in
to the application using single sign-on. For more information, see Introduction to the Access Panel.
IMPORTANT
You need to deploy your own application or subscribe to an application on your SAP Cloud Platform account to test single
sign on. In this tutorial, an application is deployed in the account.
Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Cloud Platform supports SP initiated SSO
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP Cloud Platform , select SAP Cloud Platform from the result panel, and then click
the Add button to add the application.
2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.
3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.
4. On the Basic SAML Configuration section, perform the following steps:
a. In the Sign On URL textbox, type the URL used by your users to sign into your SAP Cloud Platform
application. This is the account-specific URL of a protected resource in your SAP Cloud Platform application.
The URL is based on the following pattern:
https://<applicationName><accountName>.<landscape host>.ondemand.com/<path_to_protected_resource>
NOTE
This is the URL in your SAP Cloud Platform application that requires the user to authenticate.
https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
b. In the Identifier textbox, type a URL using one of the
following patterns:
https://hanatrial.ondemand.com/<instancename>
https://hana.ondemand.com/<instancename>
https://us1.hana.ondemand.com/<instancename>
https://ap1.hana.ondemand.com/<instancename>
c. In the Reply URL textbox, type a URL using the following pattern:
https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
https://<subdomain>.us1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.us1.hana.ondemand.com/<instancename>
https://<subdomain>.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.hana.ondemand.com/<instancename>
NOTE
These values are not real. Update these values with the actual Sign-On URL, Identifier, and Reply URL. Contact the SAP
Cloud Platform Client support team to get the Sign-On URL and Identifier. You can get the Reply URL from the trust
management section, as explained later in the tutorial.
5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Federation Metadata XML from the given options as per your requirement
and save it on your computer.
3. In the Trust Management section, under Local Service Provider , perform the following steps:
a. Click Edit .
b. As Configuration Type , select Custom .
c. As Local Provider Name , leave the default value. Copy this value and paste it into the Identifier field in
the Azure AD configuration for SAP Cloud Platform.
d. To generate a Signing Key and a Signing Certificate key pair, click Generate Key Pair .
e. As Principal Propagation , select Disabled .
f. As Force Authentication , select Disabled .
g. Click Save .
4. After saving the Local Service Provider settings, perform the following to obtain the Reply URL:
a. Download the SAP Cloud Platform metadata file by clicking Get Metadata .
b. Open the downloaded SAP Cloud Platform metadata XML file, and then locate the
ns3:AssertionConsumerService tag.
c. Copy the value of the Location attribute, and then paste it into the Reply URL field in the Azure AD
configuration for SAP Cloud Platform.
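A minimal sketch of pulling that Location value out of the metadata file with PowerShell follows; the file path is a placeholder:

# Minimal sketch: read the AssertionConsumerService Location (the Reply URL)
# from the downloaded SAP Cloud Platform metadata file (placeholder path).
[xml]$metadata = Get-Content -Path .\scp-metadata.xml
$metadata.EntityDescriptor.SPSSODescriptor.AssertionConsumerService |
    Select-Object Binding, Location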
5. Click the Trusted Identity Provider tab, and then click Add Trusted Identity Provider .
NOTE
To manage the list of trusted identity providers, you need to have chosen the Custom configuration type in the Local
Service Provider section. For Default configuration type, you have a non-editable and implicit trust to the SAP ID
Service. For None, you don't have any trust settings.
6. Click the General tab, and then click Browse to upload the downloaded metadata file.
NOTE
After uploading the metadata file, the values for Single Sign-on URL , Single Logout URL , and Signing
Certificate are populated automatically.
a. Click Add Assertion-Based Attribute , and then add the following assertion-based attributes:
ASSERTION ATTRIBUTE                                                   PRINCIPAL ATTRIBUTE
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname        firstname
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname          lastname
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress     email
NOTE
The configuration of the Attributes depends on how the application(s) on SCP are developed, that is, which
attribute(s) they expect in the SAML response and under which name (Principal Attribute) they access this attribute in
the code.
b. The Default Attribute in the screenshot is just for illustration purposes. It is not required to make the
scenario work.
c. The names and values for Principal Attribute shown in the screenshot depend on how the application is
developed. It is possible that your application requires different mappings.
Assertion-based groups
As an optional step, you can configure assertion-based groups for your Azure Active Directory Identity Provider.
Using groups on SAP Cloud Platform allows you to dynamically assign one or more users to one or more roles in
your SAP Cloud Platform applications, determined by values of attributes in the SAML 2.0 assertion.
For example, if the assertion contains the attribute "contract=temporary", you may want all affected users to be
added to the group "TEMPORARY". The group "TEMPORARY" may contain one or more roles from one or more
applications deployed in your SAP Cloud Platform account.
Use assertion-based groups when you want to simultaneously assign many users to one or more roles of
applications in your SAP Cloud Platform account. If you want to assign only a single or small number of users to
specific roles, we recommend assigning them directly in the Authorizations tab of the SAP Cloud Platform
cockpit.
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .
4. Click the Add user button, then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
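These user creation and assignment steps can also be scripted; a minimal sketch with the AzureAD PowerShell module follows, where the UPN, password, and search string are placeholders:

# Minimal sketch (AzureAD PowerShell module); all concrete values are placeholders.
Connect-AzureAD

# Create the Britta Simon test user.
$pp = New-Object Microsoft.Open.AzureAD.Model.PasswordProfile
$pp.Password = "<initial-password>"
$user = New-AzureADUser -DisplayName "Britta Simon" `
    -UserPrincipalName "BrittaSimon@<your-tenant>.onmicrosoft.com" `
    -MailNickName "BrittaSimon" -AccountEnabled $true -PasswordProfile $pp

# Assign the user to the SAP Cloud Platform enterprise application.
# [Guid]::Empty works as the role ID when the app defines no specific app roles.
$sp = Get-AzureADServicePrincipal -SearchString "SAP Cloud Platform"
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId -Id ([Guid]::Empty)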
Create SAP Cloud Platform test user
In order to enable Azure AD users to log in to SAP Cloud Platform, you must assign SAP Cloud Platform roles
to them.
To assign a role to a user, perform the following steps:
1. Log in to your SAP Cloud Platform cockpit.
2. Perform the following:
a. Click Authorization .
b. Click the Users tab.
c. In the User textbox, type the user’s email address.
d. Click Assign to assign the user to a role.
e. Click Save .
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud Platform tile in the Access Panel, you should be automatically signed in to the SAP
Cloud Platform for which you set up SSO. For more information about the Access Panel, see Introduction to the
Access Panel.
Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory Single sign-on (SSO)
integration with SAP NetWeaver
11/2/2020 • 11 minutes to read
In this tutorial, you'll learn how to integrate SAP NetWeaver with Azure Active Directory (Azure AD). When you
integrate SAP NetWeaver with Azure AD, you can:
Control in Azure AD who has access to SAP NetWeaver.
Enable your users to be automatically signed-in to SAP NetWeaver with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP NetWeaver single sign-on (SSO) enabled subscription.
SAP NetWeaver V7.20 or later is required.
Scenario description
SAP NetWeaver supports both SAML (SP initiated SSO ) and OAuth . In this tutorial, you configure and test
Azure AD SSO in a test environment.
NOTE
Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
NOTE
Configure the application either in SAML or in OAuth as per your organizational requirement.
login/create_sso2_ticket = 2
login/accept_sso2_ticket = 1
login/ticketcache_entries_max = 1000
login/ticketcache_off = 0
login/ticket_only_by_https = 0
icf/set_HTTPonly_flag_on_cookies = 3
icf/user_recheck = 0
http/security_session_timeout = 1800
http/security_context_cache_size = 2500
rdisp/plugin_auto_logout = 1800
rdisp/autothtime = 60
NOTE
Adjust the above parameters as per your organization’s requirements; they are given here as an indication only.
b. If necessary, adjust the parameters in the instance/default profile of the SAP system and restart the SAP
system.
c. Double-click the relevant client to enable the HTTP security session.
d. Activate the following SICF services:
/sap/public/bc/sec/saml2
/sap/public/bc/sec/cdc_ext_service
/sap/bc/webdynpro/sap/saml2
/sap/bc/webdynpro/sap/sec_diag_tool (This is only to enable / disable trace)
4. Go to transaction code SAML2 in the business client of the SAP system [T01/122]. It opens a user interface in a
browser. In this example, we use 122 as the SAP business client.
5. Provide your username and password to sign in to the user interface, and then click Edit .
6. Change the Provider Name from T01122 to http://T01122 and click Save .
NOTE
By default, the provider name is in the <sid><client> format, but Azure AD expects the name in the
<protocol>://<name> format. We recommend maintaining the provider name as https://<sid><client> so that
multiple SAP NetWeaver ABAP engines can be configured in Azure AD.
7. Generating Service Provider Metadata : Once the Local Provider and Trusted Providers settings are
configured on the SAML 2.0 user interface, the next step is to generate the service
provider’s metadata file (which contains all the settings, authentication contexts, and other
configurations in SAP). Once this file is generated, upload it in Azure AD.
4. On the Basic SAML Configuration section, if you wish to configure the application in IDP initiated mode,
perform the following step:
a. Click Upload metadata file to upload the Service Provider metadata file , which you obtained
earlier.
b. Click the folder logo to select the metadata file and click Upload .
c. After the metadata file is successfully uploaded, the Identifier and Reply URL values are automatically
populated in the Basic SAML Configuration section, as shown below:
d. In the Sign-on URL text box, type a URL using the following pattern:
https://<your company instance of SAP NetWeaver>
NOTE
We have seen a few customers report an error of an incorrect Reply URL configured for their instance. If you receive
any such error, you can use the following PowerShell script as a workaround to set the correct Reply URL for your
instance. Set the ServicePrincipal object ID yourself first, or you can also pass it in as a parameter.
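As in the SAP Fiori tutorial earlier, a minimal sketch of such a workaround with the AzureAD PowerShell module (the object ID and Reply URL are placeholders):

# Minimal sketch; set the object ID here or pass it in as a parameter.
Connect-AzureAD
Set-AzureADServicePrincipal -ObjectId "<service-principal-object-id>" `
    -ReplyUrls @("https://<your company instance of SAP NetWeaver>/<reply-path>")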
5. SAP NetWeaver application expects the SAML assertions in a specific format, which requires you to add
custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the
list of default attributes. Click Edit icon to open User Attributes dialog.
6. In the User Claims section on the User Attributes dialog, configure SAML token attribute as shown in the
image above and perform the following steps:
a. Click Edit icon to open the Manage user claims dialog.
3. Press Add and select Upload Metadata File from the context menu.
4. Upload the metadata file that you downloaded from the Azure portal.
5. On the next screen, type the Alias name (for example, aadsts ) and press Next to continue.
6. Make sure that the Digest Algorithm is SHA-256 (no changes are required), and press
Next .
11. Go to tab Trusted Provider > Identity Federation (from bottom of the screen). Click Edit .
12. Click Add under the Identity Federation tab (bottom window).
13. From the pop-up window, select Unspecified from the Suppor ted NameID formats and click OK.
14. Note that the User ID Source and User ID Mapping Mode values determine the link between the SAP user
and the Azure AD claim.
Scenario: SAP user to Azure AD user mapping.
a. NameID details screenshot from SAP.
b. Screenshot mentioning the required claims from Azure AD.
Scenario: Select the SAP user ID based on the configured email address in SU01. In this case, the email ID should be
configured in SU01 for each user who requires SSO.
a. NameID details screenshot from SAP.
15. Click Save and then click Enable to enable the identity provider.
Test SSO
1. Once the identity provider Azure AD is activated, try accessing the below URL to check SSO (there will be no
prompt for username and password):
https://<sapurl>/sap/bc/bsp/sap/it00/default.htm
NOTE
Replace sapurl with the actual SAP host name.
2. The above URL should take you to the screen mentioned below. If you are able to reach the below page, the
Azure AD SSO setup is successfully done.
3. If a username and password prompt occurs, diagnose the issue by enabling trace using the below URL:
https://<sapurl>/sap/bc/webdynpro/sap/sec_diag_tool?sap-client=122&sap-language=EN#
Then click the OAuth pushbutton on the top button bar and assign the scope (keep the default name as offered).
4. For our example, the scope is DAAG_MNGGRP_001 ; it is generated from the service name by automatically
adding a number. The report /IWFND/R_OAUTH_SCOPES can be used to change the name of the scope or to create it manually.
NOTE
The message soft state status is not supported can be ignored; it does not indicate a problem. For more details, refer here
NOTE
For more details, refer to OAuth 2.0 Client Registration for the SAML Bearer Grant Type here
3. In transaction SU01 , create user CLIENT1 as a System type and assign a password. Save the password, as you
need to provide the credential to the API programmer, who should embed it with the username in the calling
code. No profile or role should be assigned.
Register the new OAuth 2.0 Client ID with the creation wizard
1. To register a new OAuth 2.0 client, start transaction SOAUTH2 . The transaction displays an overview
of the OAuth 2.0 clients that are already registered. Choose Create to start the wizard for the new
OAuth client, named CLIENT1 in this example.
2. In transaction SOAUTH2 , provide the description, and then click Next .
3. Select the already added SAML2 IdP (Azure AD) from the dropdown list and save.
4. Click Add under scope assignment to add the previously created scope DAAG_MNGGRP_001 .
5. Click Finish .
Next Steps
Once you configure SAP NetWeaver with Azure AD, you can enforce session control, which protects against
exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from
Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security
Tutorial: Azure Active Directory integration with SAP
Business ByDesign
11/2/2020 • 7 minutes to read
In this tutorial, you learn how to integrate SAP Business ByDesign with Azure Active Directory (Azure AD).
Integrating SAP Business ByDesign with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Business ByDesign.
You can enable your users to be automatically signed-in to SAP Business ByDesign (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.
Prerequisites
To configure Azure AD integration with SAP Business ByDesign, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a free account
SAP Business ByDesign single sign-on enabled subscription
Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Business ByDesign supports SP initiated SSO
2. Navigate to Enterprise Applications and then select the All Applications option.
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP Business ByDesign , select SAP Business ByDesign from the result panel, and
then click the Add button to add the application.
2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.
3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
https://<servername>.sapbydesign.com
NOTE
These values are not real. Update these values with the actual Sign on URL and Identifier. Contact SAP Business
ByDesign Client support team to get these values. You can also refer to the patterns shown in the Basic SAML
Configuration section in the Azure portal.
5. SAP Business ByDesign application expects the SAML assertions in a specific format. Configure the following
claims for this application. You can manage the values of these attributes from the User Attributes section
on application integration page. On the Set up Single Sign-On with SAML page, click Edit button to
open User Attributes dialog.
a. Login URL
b. Azure AD Identifier
c. Logout URL
Configure SAP Business ByDesign Single Sign-On
1. Sign on to your SAP Business ByDesign portal with administrator rights.
2. Navigate to Application and User Management Common Task and click the Identity Provider tab.
3. Click New Identity Provider and select the metadata XML file that you have downloaded from the Azure
portal. By importing the metadata, the system automatically uploads the required signature certificate and
encryption certificate.
4. To include the Assertion Consumer Service URL into the SAML request, select Include Assertion
Consumer Service URL .
5. Click Activate Single Sign-On .
6. Save your changes.
7. Click the My System tab.
8. In the Azure AD Sign On URL textbox, paste Login URL value, which you have copied from the Azure
portal.
9. Specify whether the employee can manually choose between logging on with user ID and password or SSO
by selecting Manual Identity Provider Selection .
10. In the SSO URL section, specify the URL that should be used by the employee to sign on to the system. In
the URL Sent to Employee dropdown list, you can choose between the following options:
Non-SSO URL
The system sends only the normal system URL to the employee. The employee cannot sign on using SSO,
and must use a password or certificate instead.
SSO URL
The system sends only the SSO URL to the employee. The employee can sign on using SSO. The authentication
request is redirected through the IdP.
Automatic Selection
If SSO is not active, the system sends the normal system URL to the employee. If SSO is active, the system
checks whether the employee has a password. If a password is available, both SSO URL and Non-SSO URL
are sent to the employee. However, if the employee has no password, only the SSO URL is sent to the
employee.
11. Save your changes.
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
4. Click the Add user button, then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
Create SAP Business ByDesign test user
In this section, you create a user called Britta Simon in SAP Business ByDesign. Work with the SAP Business
ByDesign Client support team to add the users to the SAP Business ByDesign platform.
NOTE
Make sure that the NameID value matches the username field in the SAP Business ByDesign platform.
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Business ByDesign tile in the Access Panel, you should be automatically signed in to the
SAP Business ByDesign for which you set up SSO. For more information about the Access Panel, see Introduction to
the Access Panel.
Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP
HANA
11/2/2020 • 8 minutes to read
In this tutorial, you learn how to integrate SAP HANA with Azure Active Directory (Azure AD). Integrating SAP
HANA with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP HANA.
You can enable your users to be automatically signed-in to SAP HANA (Single Sign-On) with their Azure AD
accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.
Prerequisites
To configure Azure AD integration with SAP HANA, you need the following items:
An Azure AD subscription
A SAP HANA subscription that's single sign-on (SSO) enabled
A HANA instance that's running on any public IaaS, on-premises, Azure VM, or SAP large instances in Azure
The XSA Administration web interface, as well as HANA Studio installed on the HANA instance
NOTE
We do not recommend using a production environment of SAP HANA to test the steps in this tutorial. Test the integration
first in the development or staging environment of the application, and then use the production environment.
Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP HANA supports IDP initiated SSO
SAP HANA supports just-in-time user provisioning
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP HANA , select SAP HANA from the result panel, and then click the Add button to
add the application.
2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.
3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.
4. On the Set up Single Sign-On with SAML page, perform the following steps:
b. In the Reply URL text box, type a URL using the following pattern:
https://<Customer-SAP-instance-url>/sap/hana/xs/saml/login.xscfunc
NOTE
These values are not real. Update these values with the actual Identifier and Reply URL. Contact SAP HANA Client
support team to get these values. You can also refer to the patterns shown in the Basic SAML Configuration
section in the Azure portal.
5. SAP HANA application expects the SAML assertions in a specific format. Configure the following claims for
this application. You can manage the values of these attributes from the User Attributes section on
application integration page. On the Set up Single Sign-On with SAML page, click Edit button to open
User Attributes dialog.
6. In the User attributes section on the User Attributes & Claims dialog, perform the following steps:
a. Click Edit icon to open the Manage user claims dialog.
b. From the Transformation list, select ExtractMailPrefix() .
c. From the Parameter 1 list, select user.mail .
d. Click Save .
7. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Federation Metadata XML from the given options as per your requirement
and save it on your computer.
2. In the XSA Web Interface, go to SAML Identity Provider . From there, select the + button on the bottom of
the screen to display the Add Identity Provider Info pane. Then take the following steps:
a. In the Add Identity Provider Info pane, paste the contents of the Metadata XML (which you
downloaded from the Azure portal) into the Metadata box.
b. If the contents of the XML document are valid, the parsing process extracts the information that's required
for the Subject, Entity ID, and Issuer fields in the General data screen area. It also extracts the
information that's necessary for the URL fields in the Destination screen area, for example, the Base URL
and SingleSignOn URL (*) fields.
c. In the Name box of the General Data screen area, enter a name for the new SAML SSO identity provider.
NOTE
The name of the SAML IDP is mandatory and must be unique. It appears in the list of available SAML IDPs that is
displayed when you select SAML as the authentication method for SAP HANA XS applications to use. For example,
you can do this in the Authentication screen area of the XS Artifact Administration tool.
3. Select Save to save the details of the SAML identity provider and to add the new SAML IDP to the list of
known SAML IDPs.
4. In HANA Studio, within the system properties of the Configuration tab, filter the settings by saml . Then
adjust the assertion_timeout from 10 sec to 120 sec .
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .
4. Click the Add user button, then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
Create SAP HANA test user
To enable Azure AD users to sign in to SAP HANA, you must provision them in SAP HANA. SAP HANA supports
just-in-time provisioning , which is enabled by default.
If you need to create a user manually, take the following steps:
NOTE
You can change the external authentication that the user uses. They can authenticate with an external system such as
Kerberos. For detailed information about external identities, contact your domain administrator.
1. Open the SAP HANA Studio as an administrator, and then enable the DB-User for SAML SSO.
2. Select the invisible check box to the left of SAML , and then select the Configure link.
3. Select Add to add the SAML IDP. Select the appropriate SAML IDP, and then select OK .
4. Add the External Identity (in this case, BrittaSimon) or choose Any . Then select OK .
NOTE
If the Any check box is not selected, then the user name in HANA needs to exactly match the name of the user in the
UPN before the domain suffix. (For example, [email protected] becomes BrittaSimon in HANA.)
Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Cloud for Customer
11/2/2020 • 6 minutes to read
In this tutorial, you'll learn how to integrate SAP Cloud for Customer with Azure Active Directory (Azure AD). When
you integrate SAP Cloud for Customer with Azure AD, you can:
Control in Azure AD who has access to SAP Cloud for Customer.
Enable your users to be automatically signed-in to SAP Cloud for Customer with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.
Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Cloud for Customer single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Cloud for Customer supports SP-initiated SSO.
Configure and test Azure AD single sign-on for SAP Cloud for Customer
Configure and test Azure AD SSO with SAP Cloud for Customer using a test user called B.Simon . For SSO to work,
you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud for Customer.
To configure and test Azure AD SSO with SAP Cloud for Customer, complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Cloud for Customer SSO - to configure the single sign-on settings on application side.
a. Create SAP Cloud for Customer test user - to have a counterpart of B.Simon in SAP Cloud for
Customer that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.
4. On the Basic SAML Configuration section, enter the values for the following fields:
a. In the Sign on URL text box, type a URL using the following pattern:
https://<server name>.crm.ondemand.com
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
https://<server name>.crm.ondemand.com
NOTE
These values are not real. Update them with the actual Sign on URL and Identifier. Contact the SAP Cloud for
Customer Client support team to get these values. You can also refer to the patterns shown in the Basic SAML
Configuration section in the Azure portal.
5. The SAP Cloud for Customer application expects the SAML assertions in a specific format, which requires you to
add custom attribute mappings to your SAML token attributes configuration. The following screenshot
shows the list of default attributes. Click the Edit icon to open the User Attributes dialog.
6. In the User Attributes section of the User Attributes & Claims dialog, perform the following steps:
a. Click the Edit icon to open the Manage user claims dialog.
b. Select Transformation as the source.
c. From the Transformation list, select ExtractMailPrefix().
d. From the Parameter 1 list, select the user attribute you want to use for your implementation. For
example, if you want to use EmployeeID as the unique user identifier and you have stored the attribute value
in ExtensionAttribute2, select user.extensionattribute2. The ExtractMailPrefix() transformation emits only
the portion of the selected attribute's value that precedes the @ sign.
e. Click Save.
7. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
8. On the Set up SAP Cloud for Customer section, copy the appropriate URL(s) based on your requirement.
Assign the Azure AD test user
4. Select Add user, then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
Create SAP Cloud for Customer test user
a. In the First Name text box, enter the first name of the user, such as B.
b. In the Last Name text box, enter the last name of the user, such as Simon.
c. In the E-Mail text box, enter the email address of the user, such as [email protected].
d. In the Login Name text box, enter the login name of the user, such as B.Simon.
e. Select the User Type as per your requirement.
f. Select the Account Activation option as per your requirement.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud for Customer tile in the Access Panel, you should be automatically signed in to the
SAP Cloud for Customer for which you set up SSO. For more information about the Access Panel, see Introduction
to the Access Panel.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SAP Cloud for Customer with Azure AD
Azure monitor for SAP solutions (preview)
12/22/2020 • 5 minutes to read
Overview
Azure Monitor for SAP Solutions is an Azure-native monitoring product for customers running their SAP
landscapes on Azure. The product works with both SAP on Azure Virtual Machines and SAP on Azure Large
Instances. With Azure Monitor for SAP Solutions, customers can collect telemetry data from Azure infrastructure
and databases in one central location and visually correlate the telemetry for faster troubleshooting.
Azure Monitor for SAP Solutions is offered through Azure Marketplace. It provides a simple, intuitive setup
experience, and deploying the resource for Azure Monitor for SAP Solutions (known as the SAP monitor
resource) takes only a few clicks.
Customers can monitor different components of an SAP landscape, such as Azure Virtual Machines, a high-
availability cluster, or an SAP HANA database, by adding the corresponding provider for each component.
Supported infrastructure:
Azure Virtual Machine
Azure Large Instance
Supported databases:
SAP HANA Database
Microsoft SQL Server
Azure Monitor for SAP Solutions leverages existing Azure Monitor capabilities, such as Log Analytics and
Workbooks, to provide additional monitoring. Customers can create custom visualizations by editing the default
Workbooks provided by Azure Monitor for SAP Solutions, write custom queries and create custom alerts by using
the Azure Log Analytics workspace, take advantage of the flexible retention period, and connect monitoring data
to their ticketing system.
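For example, a custom query can be run against that workspace directly from PowerShell. The sketch below is illustrative only: it assumes the Az.OperationalInsights module, placeholder workspace and resource group names, and a hypothetical custom-log table name (check your workspace for the tables the SAP monitor actually writes).

# Run a custom Log Analytics query against the workspace behind the SAP monitor.
$Workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'myResourceGroup' -Name 'myWorkspace'
$Result = Invoke-AzOperationalInsightsQuery -WorkspaceId $Workspace.CustomerId -Query 'SapHana_HostConfig_CL | take 10'  # hypothetical table name
$Result.Results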
Architecture overview
At a high level, the following diagram shows how Azure Monitor for SAP Solutions collects telemetry from the SAP
HANA database. The architecture is agnostic to whether SAP HANA is deployed on Azure Virtual Machines or
Azure Large Instances.
NOTE
Customers are responsible for patching and maintaining the VM deployed in the managed resource group.
TIP
Customers can choose to use an existing Log Analytics workspace for telemetry collection if it is deployed within the same
Azure subscription as the resource for Azure Monitor for SAP Solutions.
Architecture Highlights
Following are the key highlights of the architecture:
Multi-instance - Customers can create monitoring for multiple instances of a given component type (for
example, HANA DB, HA cluster, Microsoft SQL Server) across multiple SAP SIDs within a VNET with a single
Azure Monitor for SAP Solutions resource.
Multi-provider - The architecture diagram above shows the SAP HANA provider as an example. Similarly,
customers can configure additional providers for corresponding components (for example, HANA DB, HA
cluster, Microsoft SQL Server) to collect data from those components.
Open source - The source code of Azure Monitor for SAP Solutions is available on GitHub. Customers can refer
to the provider code to learn more about the product, contribute, or share feedback.
Extensible query framework - SQL queries to collect telemetry data are written in JSON, and additional SQL
queries to collect more telemetry can easily be added. Customers can request specific telemetry data to be
added to Azure Monitor for SAP Solutions by leaving feedback through the link at the end of this document or
by contacting their account team.
Pricing
Azure Monitor for SAP Solutions is a free product (no license fee). Customers are responsible for paying the cost
for the underlying components in the managed resource group.
Next steps
Learn about providers and create your first Azure Monitor for SAP Solutions resource.
Learn more about Providers
Deploy Azure Monitor for SAP solutions with Azure PowerShell
Do you have questions about Azure Monitor for SAP Solutions? Check the FAQ section
Azure monitor for SAP solutions providers (preview)
12/22/2020 • 4 minutes to read
Overview
In the context of Azure Monitor for SAP Solutions, a provider type refers to a specific kind of provider, for example
SAP HANA, which is configured for a specific component within the SAP landscape, such as the SAP HANA
database. A provider contains the connection information for the corresponding component and helps to collect
telemetry data from that component. One Azure Monitor for SAP Solutions resource (also known as the SAP
monitor resource) can be configured with multiple providers of the same provider type or with multiple providers
of multiple provider types.
Customers can choose to configure different provider types to enable data collection from the corresponding
components in their SAP landscape. For example, customers can configure one provider for the SAP HANA
provider type, another provider for the high-availability cluster provider type, and so on.
Customers can also choose to configure multiple providers of a specific provider type to reuse the same SAP
monitor resource and its associated managed resource group. Learn more about managed resource groups. For
public preview, the following provider types are supported:
SAP HANA
High-availability cluster
Microsoft SQL Server
Customers should configure at least one provider from the available provider types at the time of deploying the
SAP monitor resource. By configuring a provider, customers initiate data collection from the corresponding
component for which the provider is configured.
If customers don't configure any providers at the time of deploying the SAP monitor resource, the resource is still
deployed successfully, but no telemetry data is collected. Customers can add providers after deployment through
the SAP monitor resource within the Azure portal, and they can add or delete providers from the SAP monitor
resource at any time.
TIP
If you would like Microsoft to implement a specific provider, please leave feedback through the link at the end of this document
or reach out to your account team.
Next steps
Create your first Azure Monitor for SAP solutions resource.
Do you have questions about Azure Monitor for SAP Solutions? Check the FAQ section
Deploy Azure Monitor for SAP Solutions with Azure
portal
12/22/2020 • 2 minutes to read
Azure Monitor for SAP Solutions resources can be created through the Azure portal. This method provides a
browser-based user interface to deploy Azure Monitor for SAP Solutions and configure providers.
2. In the Basics tab, provide the required values. If applicable, you can use an existing Log Analytics workspace.
3. When selecting a virtual network, ensure that the systems you want to monitor are reachable from within
that VNET.
IMPORTANT
Selecting Share for Data sharing with Microsoft enables our support teams to provide additional support.
Configure providers
SAP HANA provider
1. Select the Provider tab to add the providers you want to configure. You can add multiple providers one
after another or add them after deploying the monitoring resource.
2. Select Add provider and choose SAP HANA from the drop-down.
IMPORTANT
Ensure that the SAP HANA provider is configured for the SAP HANA 'master' node.
High-availability cluster (Pacemaker) provider
IMPORTANT
To configure the High-availability cluster (Pacemaker) provider, ensure that ha_cluster_provider is installed on each
node. For more information, see HA cluster exporter.
Microsoft SQL Server provider
2. Select Add provider and choose Microsoft SQL Server from the drop-down.
3. Fill out the fields using information associated with your Microsoft SQL Server.
4. When finished, select Add provider . Continue to add additional providers as needed or select Review +
create to complete the deployment.
Next steps
Learn more about Azure Monitor for SAP Solutions
Quickstart: Deploy Azure Monitor for SAP Solutions
with Azure PowerShell
12/22/2020 • 3 minutes to read
This article describes how you can create Azure Monitor for SAP Solutions resources using the Az.HanaOnAzure
PowerShell module.
Caution
Azure Monitor for SAP Solutions is currently in public preview. This preview version is provided without a service
level agreement. It's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure
Previews.
Requirements
If you don't have an Azure subscription, create a free account before you begin.
If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect
to your Azure account using the Connect-AzAccount cmdlet. For more information about installing the Az
PowerShell module, see Install Azure PowerShell. If you choose to use Cloud Shell, see Overview of Azure Cloud
Shell for more information.
IMPORTANT
While the Az.HanaOnAzure PowerShell module is in preview, you must install it separately using the Install-Module
cmdlet. Once this PowerShell module becomes generally available, it becomes part of future Az PowerShell module releases
and available natively from within Azure Cloud Shell.
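Taken together, the setup described above might look like the following sketch (run locally; Azure Cloud Shell already includes the Az module):

# Install the Az PowerShell module (skip in Azure Cloud Shell) and sign in.
Install-Module -Name Az -Repository PSGallery -Force
Connect-AzAccount

# While Az.HanaOnAzure is in preview, install it separately.
Install-Module -Name Az.HanaOnAzure -Repository PSGallery -Force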
If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be
billed. Select a specific subscription using the Set-AzContext cmdlet.
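For example, using a placeholder subscription ID:

# Select the subscription in which the SAP monitor resources should be billed.
Set-AzContext -Subscription '00000000-0000-0000-0000-000000000000'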
SAP monitor
To create an SAP monitor, you use the New-AzSapMonitor cmdlet. The following example creates an SAP monitor for
the specified subscription, resource group, and resource name.
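The example references $Workspace and $WorkspaceKey variables. A minimal sketch for populating them, assuming an existing Log Analytics workspace (the workspace and resource group names are placeholders):

# Look up the existing Log Analytics workspace and its shared key.
$Workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'myResourceGroup' -Name 'myWorkspace'
$WorkspaceKey = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName 'myResourceGroup' -Name 'myWorkspace'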
$SapMonitorParams = @{
    Name = 'ps-sapmonitor-t01'
    ResourceGroupName = 'myResourceGroup'
    Location = 'westus2'
    EnableCustomerAnalytic = $true
    MonitorSubnet = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/vnet-sap/subnets/mysubnet'
    LogAnalyticsWorkspaceSharedKey = $WorkspaceKey.PrimarySharedKey
    LogAnalyticsWorkspaceId = $Workspace.CustomerId
    LogAnalyticsWorkspaceResourceId = $Workspace.ResourceId
}
New-AzSapMonitor @SapMonitorParams
To retrieve the properties of an SAP monitor, you use the Get-AzSapMonitor cmdlet. The following example gets
the properties of an SAP monitor for the specified subscription, resource group, and resource name.
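A minimal sketch of that call, reusing the names from the example above:

# Retrieve the SAP monitor created earlier.
Get-AzSapMonitor -ResourceGroupName 'myResourceGroup' -Name 'ps-sapmonitor-t01'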
Provider instance
To create a provider instance, you use the New-AzSapMonitorProviderInstance cmdlet. The following example
creates a provider instance for the specified subscription, resource group, and resource name.
$SapProviderParams = @{
    ResourceGroupName = 'myResourceGroup'
    Name = 'ps-sapmonitorins-t01'
    SapMonitorName = 'ps-sapmonitor-t01'
    ProviderType = 'SapHana'
    HanaHostname = 'hdb1-0'
    HanaDatabaseName = 'SYSTEMDB'
    HanaDatabaseSqlPort = '30015'
    HanaDatabaseUsername = 'SYSTEM'
    HanaDatabasePassword = (ConvertTo-SecureString 'Manager1' -AsPlainText -Force)
}
New-AzSapMonitorProviderInstance @SapProviderParams
To retrieve properties of a provider instance, you use the Get-AzSapMonitorProviderInstance cmdlet. The following
example gets properties of a provider instance for the specified subscription, resource group, SapMonitor name,
and resource name.
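A minimal sketch, reusing the placeholder names from the creation example:

# Retrieve the provider instance created earlier.
Get-AzSapMonitorProviderInstance -ResourceGroupName 'myResourceGroup' -SapMonitorName 'ps-sapmonitor-t01' -Name 'ps-sapmonitorins-t01'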
Clean up resources
If the resources created in this article aren't needed, you can delete them by running the following examples.
Delete the provider instance
To remove a provider instance, you use the Remove-AzSapMonitorProviderInstance cmdlet. The following example
deletes a provider instance for the specified subscription, resource group, SapMonitor name, and resource name.
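A sketch of the removal, with the same placeholder names:

# Delete only the provider instance; the SAP monitor resource remains.
Remove-AzSapMonitorProviderInstance -ResourceGroupName 'myResourceGroup' -SapMonitorName 'ps-sapmonitor-t01' -Name 'ps-sapmonitorins-t01'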
Delete the resource group
The following example deletes the specified resource group and all resources contained within it. If resources
outside the scope of this article exist in the specified resource group, they are also deleted.
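A sketch of that deletion, using the placeholder resource group from the earlier examples:

# Removing the resource group deletes the SAP monitor and everything else in it.
Remove-AzResourceGroup -Name 'myResourceGroup'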
Next steps
Learn more about Azure Monitor for SAP Solutions.
Azure Monitor for SAP solutions FAQ (preview)
12/22/2020 • 2 minutes to read
Next steps
Create your first Azure Monitor for SAP solutions resource.