Installation of SAP Systems Based on the Application Server Java of SAP NetWeaver 7.5 and SAP Solution Manager 7.2 SR2 Java on Windows: SAP HANA Database
Generated on: 2023-07-12 07:04:14 GMT+0000

Software Logistics Toolset 1.0, Current Version

PUBLIC

Original content: https://help.sap.com/docs/SLTOOLSET/14ccd8beec6f4651905783bc469bce5d?locale=en-US&state=PRODUCTION&version=CURRENT_VERSION

Warning

This document has been generated from the SAP Help Portal and is an incomplete version of the official SAP product documentation. The information included in custom documentation may not reflect the arrangement of topics in the SAP Help Portal, and may be missing important aspects and/or correlations to other topics. For this reason, it is not for productive use.

For more information, please visit https://help.sap.com/docs/disclaimer.

This is custom documentation. For more information, please visit the SAP Help Portal.

High Availability with Microsoft Failover Clustering


You can install a high-availability SAP system with Microsoft Failover Clustering. The Failover Clustering software improves the
availability of the system and protects it against failure and unplanned downtime, enabling 24-hour operation, 365 days a year.

With high availability, you enable critical system components, known as single points of failure (SPOFs), to be automatically switched from one machine to the other if hardware or software problems arise on one machine. With the help of this switchover – or failover – the system can continue functioning.

Apart from enabling failover when hardware problems occur, you can also use Failover Clustering to avoid downtime when you perform essential system maintenance. If you need to maintain one host (failover cluster node), you can deliberately switch the cluster resources to the other host (failover cluster node) and temporarily operate them there while maintenance is in progress. When maintenance work is finished, you can easily move the resources back to their original node and continue operating them there.

When you are setting up the SAP system with Microsoft Failover Clustering, you combine standard installation steps, described
earlier in this documentation, with cluster-specific steps, described here.

You have the following options to install a high-availability SAP system with Microsoft Failover Clustering:

You install the SAP related parts (for example: SCS instance, additional standalone Gateways, Web Dispatcher instance,
etc.) in one Microsoft Failover Cluster.

You install the SAP related parts (for example: SCS instance, additional standalone Gateways, Web Dispatcher instance,
etc.) in two Microsoft Failover Clusters.

You have the following options to install a Microsoft Failover Cluster:

CSD (Cluster Shared Disks)

A Failover Cluster which contains shared disks.

A database can be optionally installed in this Cluster in its own cluster group.

FSC (File Share Cluster)

A Failover Cluster which does not contain shared disks and uses a remote file share instead.

A database cannot be installed in this cluster because databases need shared disks. One exception: MS SQL
using “AlwaysOn” option.

 Note
The user starting the software provisioning manager must have full access rights on the file share \\<sapglobalhost>\sapmnt.

This is custom documentation. For more information, please visit the SAP Help Portal 2
7/12/2023

Landscape of a Cluster using Shared Disks

Landscape of a File Share Cluster

You have the following options to install the database instance with a high-availability SAP system:

You install the database instance on a different host or cluster on either the same or a different operating system.

You use third-party high-availability solutions to improve the availability of your database instance.

Important Information
To install a new SAP system with Microsoft Failover Clustering, you have to perform a number of extra steps specially required
for the cluster and configure the SAP system so that it can take advantage of the cluster functionality:

Since the correct configuration of network addresses is absolutely essential for the cluster to function properly, you have
to perform a number of additional steps that are necessary to set up and check address resolution.

Since the cluster hardware has at least two nodes that have access to all local and shared storage devices, you have to
install some components on all nodes and pay attention to special rules for distributing components to local disks,
shared disks, or external file shares.

You have to install and configure the SCS instance to run on two cluster nodes in one Microsoft Failover Cluster.

 Note

This is custom documentation. For more information, please visit the SAP Help Portal 3
7/12/2023
If you have an existing SAP system and plan to migrate to a failover cluster with new hardware, you install the SAP system
using a system copy.

For more information about the system copy, see the System Copy Guide for your SAP system at:

http://support.sap.com/sltoolset System Provisioning System Copy Option

The system copy guide does not include the cluster-specific information, which is described here.

Terminology
In this documentation the hosts in a Microsoft Failover Cluster are referred to as first cluster node and additional cluster
node(s):

The first cluster node is the cluster node where you perform the general installation of an SAP system, for example where the database or SCS instance is to be installed.

The additional cluster node is the node where you configure the already installed SAP instances to run in Microsoft Failover Clustering.

Checklist for a High-Availability System


This section includes the steps that you have to perform for your SAP system using Microsoft Failover Clustering. Detailed
information about the steps is available in the relevant section.

Planning
1. You check that you have completed the same planning activities as for a non-HA system, including the hardware and
software requirements.

2. You decide how to set up your SAP system components in an HA configuration.

3. You decide how to distribute SAP system components to disks for HA.

4. You read Directories in an HA Configuration.

5. You read IP Addresses in an HA Configuration.

6. You obtain IP addresses for HA.

 Note
The user starting the software provisioning manager must have full access rights on the file share \\<sapglobalhost>\sapmnt.

Preparation
1. You check that you have completed the same preparations as for a non-HA system.

2. To make sure that all preparation steps have been performed correctly, check that the storage resources are available to all cluster nodes. If you want to use the CSD option, check that you can move the disk resources from one cluster node to another so that they are accessible from a single node at any time. If you want to use the FSC option, check that the external file share is accessible by your installation user from all cluster nodes.


Installation
1. You make sure that:

a. You are logged on as a domain administrator user or a domain user, who has the necessary rights on all cluster
nodes. For a list of the required permissions, see Performing a Domain Installation without being a Domain
Administrator.

 Note
In Failover Cluster configurations, make sure that the account of the cluster (<clustername>$) has full rights in the OU (Organizational Unit) in which your domain administrator configures the SAP users and the SAP group.

If these rights are missing, the software provisioning manager will try to add the cluster network name
resource to the SAP cluster group. However, because the cluster itself has no rights to add the related
computer object (CNO) to the OU, the software provisioning manager will stop and show the error message
<access denied>.

b. You do not use the user <sapsid>adm unless specified.

c. If you are prompted during the installation process, log off and log on again.

2. You configure the first cluster node.

3. You run the software provisioning manager on the first cluster node to install the database instance.

4. You configure the additional cluster node.

5. You install the primary application server instance.

6. You install at least one additional application server instance.

Post-Installation
1. You install the permanent SAP licenses on all cluster nodes.

2. You perform the post-installation checks for the enqueue replication server.

3. You perform the same post-installation steps as for a non-HA system.

Additional Information
Moving Cluster Groups, or Services and Applications, or Roles

Starting and Stopping the SAP System in an HA Configuration.

Planning
The following sections provide information about how to plan the installation of the SAP system for Microsoft Failover
Clustering. For a complete list of all steps, see section Planning in the Installation Checklist for a High-Availability System.

System Con guration with Microsoft Failover Clustering


The following chapters provide information about the configuration of your SAP system with Microsoft Failover Clustering. They describe the components you have to install for an SAP system running in a Microsoft Failover Cluster, and how to distribute them on the specific hosts. For more information, see:

SAP System Components in a Microsoft Failover Cluster

Enqueue Replication Server in a Microsoft Failover Cluster

SAP System Components in a Microsoft Failover Cluster


In a Microsoft Failover Cluster configuration, you have the following mandatory components for your SAP system:

SAP System Components in a Failover Cluster Configuration

Component | Number of Components per SAP System | Single Point of Failure
SCS instance (message services and enqueue services) | 1 | yes
Application server instance (primary application server, additional application server) | 1-<n> | no

To protect the SPOFs (SCS instance and database instance), you have to use Microsoft Failover Clustering.

If a hardware or software problem occurs on the first cluster node, the clustered SCS instance automatically fails over to another node.

If you need to maintain the cluster node where the SCS instance is running, you can switch this instance to another node. When maintenance work is finished, you move the SCS instance back to the original node.

To protect system components that are non-SPOFs, for example application servers, you have to install them as multiple
components. In this case, you must install at least two application servers (the primary application server instance and
one additional application server instance) on two different hosts. You have the following options:

You install the primary application server and the additional application server instance on the cluster nodes of a Microsoft Failover Cluster. You install them on a local disk or external file share. Any additional application server instances are installed on hosts outside of the Microsoft Failover Cluster.

If you have to maintain a cluster node, you have to stop the primary application server or the additional application server instance on that node. When you have finished maintenance, you restart the instances.

 Note
If you install the primary application server and the additional application server instance on the cluster nodes, you must perform the hardware sizing for the failover cluster host, as in this case the application server is always running on this host. This increases system load and might impact performance.

Note that, as usual in a failover cluster setup, the SCS instance also switches to run on the failover cluster host in the event of a failover, which temporarily also increases system load.

You install the primary application server and all additional application server instances on hosts that are not part of a Microsoft Failover Cluster.

SAP System Components in One Microsoft Failover Cluster


The following figures show examples for the installation of SPOFs and non-SPOFs of an SAP system in one Microsoft Failover Cluster with two nodes.

The first figure shows a Microsoft Failover Cluster configuration where the non-SPOF components (primary application server instance, additional application server instance) are installed locally on the cluster nodes. Any additional application server instances are installed outside the Microsoft Failover Cluster on separate hosts.

Java System

The following figure shows an HA configuration where the non-SPOF components (primary application server instance, additional application server instance) are installed on separate hosts that are not part of the failover cluster.


Java System

Multiple SAP Systems In One Microsoft Failover Cluster


Before SAP NetWeaver 7.0, SAP only supported the installation of one clustered SAP system in one Microsoft Failover Cluster
with two cluster nodes. The reason was that the cluster share sapmnt resource could only be assigned to one cluster group and
could only point to one shared drive.

The solution was to rename the cluster share sapmnt resource to sapmnt<SAPSID> and use junctions pointing to the local disk. This is no longer required.

 Caution
All local instances, such as an enqueue replication server, a primary or additional application server, and the local part of the SCS instance when you use a file share cluster, are installed on the local disk that the saploc share points to. Make sure that you have enough space on this local disk.

Every SAP system is placed in a separate cluster group with the unique name SAP <SAPSID>. Each SAP cluster group has its own IP address, network name, SAP service resource (or generic service resource), and SAP instance resource. If you use the CSD option, the cluster group also contains a shared disk and a sapmnt share. With the FSC option, the group does not contain a shared drive, and the sapmnt share is located on a file share.

If you have an HA configuration with three or more cluster nodes, the following restrictions apply:

The SCS instance must be configured to fail over between two cluster nodes in one Microsoft Failover Cluster.
For more information, see SAP Note 1634991.

If the database supports the installation on several cluster nodes, the database instance can be installed on more than
two cluster nodes in one Microsoft Failover Cluster.

The following figure shows the installation of multiple SAP systems in one Microsoft Failover Cluster. For each SAP system, you have to install one primary and at least one additional application server.

Multiple SAP Systems in one Microsoft Failover Cluster

Multiple SAP Systems In Multiple Microsoft Failover Clusters


Besides installing multiple SAP systems in one Microsoft Failover Cluster, you can also install multiple SAP systems in several
Microsoft Failover Clusters with two or more cluster nodes.

 Note
As of Windows Server 2012, the Microsoft Failover Clustering software supports up to 64 cluster nodes.

For this failover cluster con guration, the following restrictions apply:

The SCS instance must be configured to run on two cluster nodes in one Microsoft Failover Cluster.

For more information, see SAP Note 1634991.

If the database supports the installation on several cluster nodes, the database instance can be installed on more than
two cluster nodes in one Microsoft Failover Cluster.

The following figure shows the installation of multiple SAP systems in two Microsoft Failover Clusters with three cluster nodes, called node A, B, and C. In this example, the SCS instances are installed in the first Microsoft Failover Cluster, and the database instances for the two SAP systems are installed in the second Microsoft Failover Cluster. The application servers can be installed either on a local disk on the cluster nodes or outside the Microsoft Failover Cluster on separate hosts.

 Note
If you use an enqueue replication server, you must configure the enqueue replication server and the SCS instance on two nodes.

For more information, see SAP Note 1634991.

Multiple SAP Systems in Two Microsoft Failover Clusters

Enqueue Replication Server in a Microsoft Failover Cluster


The enqueue replication server contains a replica of the lock table (the replication table) and is an essential component in a high-availability setup. It is installed on the two cluster nodes where the SCS instance is installed and configured to run, even if you have more than two cluster nodes.

In normal operation the enqueue replication server is always active on the host where the SCS instance is not running.

If the enqueue server in a Microsoft Failover Cluster with two nodes fails on the first cluster node, the enqueue server on the additional cluster node is started. It retrieves the data from the replication table on that node and writes it to its lock table. The enqueue replication server on the additional cluster node then becomes inactive. When the first cluster node is available again, the enqueue replication server becomes active on the first cluster node.
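The takeover described above can be sketched as a small state model. This is a hedged illustration of the mechanism, not SAP code; the node names, lock entries, and dict-based "tables" are invented for the example, and the only property it demonstrates is that no locks are lost on failover and that the replication server always runs where the enqueue server does not.

```python
# Toy model of the enqueue replication mechanism in a two-node cluster.
# All names and data structures are illustrative, not SAP interfaces.

class Cluster:
    def __init__(self):
        self.scs_node = "node1"                 # enqueue server (SCS) runs here
        self.lock_table = {"LOCK1": "owner_a"}  # authoritative lock table
        # The ERS keeps a replica of the lock table on the *other* node.
        self.replication_table = dict(self.lock_table)

    def ers_node(self):
        # Invariant: the ERS is active on the host where the SCS is not running.
        return "node2" if self.scs_node == "node1" else "node1"

    def acquire(self, lock, owner):
        self.lock_table[lock] = owner
        self.replication_table[lock] = owner    # replicated to the ERS

    def failover(self):
        # The SCS fails over to the node holding the replication table and
        # rebuilds its lock table from the replica, so no locks are lost.
        self.scs_node = self.ers_node()
        self.lock_table = dict(self.replication_table)

cluster = Cluster()
cluster.acquire("LOCK2", "owner_b")
cluster.failover()
assert cluster.scs_node == "node2"
assert cluster.lock_table == {"LOCK1": "owner_a", "LOCK2": "owner_b"}
assert cluster.ers_node() == "node1"            # ERS becomes active on node1 again
```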

The following figure shows the enqueue replication server mechanism in a Microsoft Failover Cluster configuration with two nodes:

Enqueue Replication Server Mechanism on One Microsoft Failover Cluster with Two Nodes

Distribution of SAP System Components to Disks for Failover Clustering

When planning the Microsoft Failover Cluster installation, keep in mind that the cluster hardware uses different storage resources:

Local Resources

Local disks that are connected directly to the cluster nodes

Shared Storage Resources

Shared disks that can be accessed by all cluster nodes via a shared interconnect, if the CSD option is used

 Note
Shared disk is a synonym for the cluster resource type Physical Disk.

An external file share, if the FSC option is used

You need to install the SAP system components in both of the following ways:

Separately on all cluster nodes, to use the local storage on each node

On shared storage; you have two options to distribute the shared files that are used by all cluster nodes:

You install the following on different shared disks:

SCS instance
Single quorum device, if used

On an external file share that is made accessible to all cluster nodes:

All database files are installed on an external host, or on an additional cluster in this scenario

If a quorum is used, it is configured as a file share quorum on the file share host

Distribution of SAP System Components for an SAP System in a Failover Cluster with an External File Share (FSC)

Quorum Configurations on Windows


On Windows, several quorum configurations are available. The configuration to use mainly depends on the cluster setup, such as the number of cluster nodes, the storage type (single or distributed), the distribution to shared disk and file share, and the number of data centers. For more information, see the Windows documentation.

If the number of cluster nodes is odd, you do not need an additional quorum resource. For a cluster with an even number of nodes, you can configure a disk quorum, a file share quorum, or a cloud quorum.

The default quorum configuration for clusters with more than two nodes is called Node and Disk Majority.

With a quorum configuration, each node and the witness maintains its own copy of the cluster configuration data. This ensures that the cluster configuration remains available even if the active node fails or is offline.
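The majority rule behind these quorum configurations can be sketched as follows. This is an illustrative simplification (one vote per node, plus one vote for a configured witness), not the exact Windows implementation, and the function name is made up for the example.

```python
def has_quorum(nodes_up, nodes_total, witness_up=False, witness_configured=False):
    """Simplified majority vote: each node has one vote; a disk, file
    share, or cloud witness adds one more vote when configured."""
    votes_total = nodes_total + (1 if witness_configured else 0)
    votes_up = nodes_up + (1 if witness_up else 0)
    return votes_up > votes_total / 2

# Two-node cluster without a witness: losing one node loses quorum.
assert not has_quorum(nodes_up=1, nodes_total=2)

# Two-node cluster with a reachable witness survives one node failure.
assert has_quorum(nodes_up=1, nodes_total=2, witness_up=True, witness_configured=True)

# Three-node cluster tolerates one node failure without any witness,
# which is why an odd number of nodes needs no additional quorum resource.
assert has_quorum(nodes_up=2, nodes_total=3)
```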

 Caution
If you do not use the default quorum configuration for your operating system, contact your hardware partner, who can help you to analyze your needs and set up your cluster model. SAP supports these configurations if they are part of a cluster solution offered by your Original Equipment Manufacturer (OEM) or Independent Hardware Vendor (IHV).

Geographically Dispersed Cluster (Geospan)


The standard cluster configuration consists of two cluster nodes and shared storage, with all technical components located in the same data center. In a geographically dispersed cluster, also known as a geospan cluster, the cluster nodes are distributed across at least two data centers to avoid the full outage of a data center in the event of a disaster.

A geospan configuration requires a more sophisticated storage architecture, since a standard shared storage can only be located in one data center and might therefore be a single point of failure (SPOF). To prevent the disk storage from becoming a SPOF, you have to configure a storage system in each data center and replicate its content to the storage system of the other data center.

Replication can either be synchronous or asynchronous, depending on the:

Functionality of the storage subsystem

Acceptable amount of data loss during a failover

Physical layout of the storage area network

This includes the distance between the storage systems, signal latency, capacity, and speed of the network connection.

Customer budget

Directories in a Microsoft Failover Cluster Configuration


The following tables show the directories where the main software components for a high-availability system are stored:

Directories on Local Disks on Cluster Nodes

Component | Default Directory
A supported operating system | %windir%
Microsoft Failover Clustering software | %windir%\Cluster
Only if the FSC option is used: SCS instance | <Local_Drive>:\usr\sap\<SAPSID>\SCS<Instance_Number>
Application server | <Local_Drive>:\usr\sap\<SAPSID>\<Instance>
Enqueue replication server | <Local_Drive>:\usr\sap\<SAPSID>\ERS<Instance_Number>
Diagnostics Agent (optional) | <Local_Drive>:\usr\sap\<DASID>\SMDA<Instance_Number>
SAP Host Agent | %ProgramFiles%\SAP\hostctrl

Directories on Shared Disks

Component | Default Directory
Cluster quorum resource (if used) | <Drive>:\Cluster
SAP global and instance directories | <Drive>:\usr\sap ...
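The default directory patterns in the tables above can be composed programmatically, for example in an installation-planning script. The helper below only illustrates the naming scheme; the drive letter, the SAPSID PRD, and the instance numbers are made-up examples, not values from this guide.

```python
from pathlib import PureWindowsPath

def instance_dir(drive: str, sapsid: str, instance: str, number: int) -> PureWindowsPath:
    """Default local instance directory following the pattern
    <Local_Drive>:\\usr\\sap\\<SAPSID>\\<Instance><Instance_Number>."""
    # Instance numbers are conventionally two digits (00-97).
    return PureWindowsPath(f"{drive}:/", "usr", "sap", sapsid, f"{instance}{number:02d}")

print(instance_dir("C", "PRD", "SCS", 1))    # C:\usr\sap\PRD\SCS01
print(instance_dir("C", "PRD", "ERS", 11))   # C:\usr\sap\PRD\ERS11
```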

Hostnames in a Failover Cluster Configuration


A part of the installation process that is unique to Microsoft Failover Clustering is the configuration of host names and IP addresses in the network. This is a particularly important task because addressing plays a key role in the switchover procedure. Addressing must be set up correctly so that the system can take advantage of the cluster functionality and switch between nodes when hardware problems arise.

This section explains the different types of IP addresses and their function in the switchover mechanism of one Microsoft
Failover Cluster with two cluster nodes.


Types of IP Addresses
In a properly configured cluster with at least two nodes, there are at least seven IP addresses and corresponding host names for your SAP system: two IP addresses for each cluster node, one IP address for the cluster, one for the SAP cluster group, and one for the database cluster group.

Some of the addresses are assigned to the network adapters (network interface card, NIC) whereas others are virtual IP
addresses that are assigned to the cluster groups.
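As a planning aid, the seven addresses can be listed in a small table. The sketch below uses the example host names and addresses from the tables later in this guide; the Python dictionary itself is purely illustrative and not part of any SAP tool.

```python
# Minimal address plan for a two-node cluster with one clustered SAP
# system and a clustered database, using this guide's example values.
address_plan = {
    "clusA_priv": "10.1.1.1",    # node A, heartbeat network (physical)
    "clusA":      "129.20.5.1",  # node A, public network (physical)
    "clusB_priv": "10.1.1.2",    # node B, heartbeat network (physical)
    "clusB":      "129.20.5.2",  # node B, public network (physical)
    "clusgrp":    "129.20.5.3",  # cluster group (virtual)
    "dbgrp":      "129.20.5.4",  # database cluster group (virtual)
    "sapgrp":     "129.20.5.5",  # SAP cluster group (virtual)
}

# Two physical addresses per node, plus one virtual address each for the
# cluster group, the database cluster group, and the SAP cluster group.
assert len(address_plan) == 7
```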

Physical IP Addresses Assigned to Network Adapters


A Microsoft Failover Cluster configuration has at least two networks:

A public network that is used for the communication between the primary application server, additional application
servers, and the LAN.

A private network that is used internally for communication between the nodes of the cluster, also called heartbeat.

The following figure shows a Microsoft Failover Cluster with two nodes and illustrates the adapters required for the public and private networks, and their corresponding physical IP addresses. A physical IP address, in contrast to a virtual one, is stationary and permanently mapped to the same adapter.

Adapters and IP Addresses Required for Public and Private Networks in a Microsoft Failover Cluster with Two Nodes

Host Names Assigned to Network Adapters


Each of the physical IP addresses of the network adapters must have a corresponding host name. For example, on the left-hand node in the figure above, you might assign the IP addresses of the public and private network adapters as follows:

IP Addresses and Host Names

Network Adapter | IP Address | Host Name
Adapter 1 (heartbeat network) | 10.1.1.1 | clusA_priv
Adapter 2 (public network) | 129.20.5.1 | clusA

 Caution

The IP address and host name of the public network adapter are also the IP address and name of the machine. In our example, this means that the machine that is the cluster node on the left in the figure has the name clusA.

Do not confuse the host name with the computer name. Each node also has a computer name, which is usually the
same as the host name.

The computer name is displayed in the node column of the Failover Cluster Manager. However, it is not required for TCP/IP communication in the cluster. When you configure IP addresses and corresponding names, keep in mind that it is the host names that are important for the cluster, not the computer names.

Virtual IP Addresses Assigned to Cluster Groups


After you have installed the SAP system and fully configured the cluster, the critical system resources are bound together in two different groups.

Each of these groups requires a virtual IP address and network name that is permanently mapped to the group and not to a
particular node. The advantage of this is that, whenever a group is moved between nodes, its IP address and network name
move together with the group.

An HA con guration has the following groups:

SAP cluster group for each clustered SAP system

Cluster group

The following figure illustrates how the virtual IP addresses of the SAP group can move from one node to the other during a failover.

Failover of Virtual IP Addresses

Obtaining IP Addresses for a Microsoft Failover Cluster Configuration

This chapter describes how to obtain the IP addresses for the network adapters (cards) that are required to install and run your high-availability system.

Context
For a clustered system, you have to configure IP addresses correctly. During the installation procedure you have to assign at least seven IP addresses and host names. You normally obtain these names and addresses from the system administrator.

Procedure
Ask the system administrator to give you the addresses and host names listed in the tables below, which show an example for a
con guration with one Microsoft failover cluster with two nodes. You need to enter the addresses and host names later during
the installation process.

The column Defined During indicates at which stage of the installation of the operating system and the SAP system the addresses are defined in the system.

 Caution
Use the names exactly as specified by the system administrator.

 Note
In the following tables, we still use the term cluster group, and not the Windows Server 2012 (R2) term Roles.

Physical IP Addresses

Component | Example for Physical IP Address | Example for Physical Host Name | Purpose | Defined During
First cluster node: adapter for heartbeat network | 10.1.1.1 | clusA_priv | Address for internode communication on the heartbeat network | Windows installation
First cluster node: adapter for public network | 129.20.5.1 | clusA | Address of the first cluster node for communication with application servers and the LAN (this is the same as the address of the first cluster node) | Windows installation
Additional cluster node: adapter for heartbeat network | 10.1.1.2 | clusB_priv | Address for internode communication on the heartbeat network | Windows installation
Additional cluster node: adapter for public network | 129.20.5.2 | clusB | Address of the additional cluster node for communication with application servers and the LAN (this is the same as the address of the additional cluster node) | Windows installation

Virtual IP Addresses

Component | Example for Virtual IP Address | Example for Host Name | Purpose | Defined During
Cluster group | 129.20.5.3 | clusgrp | Virtual address and name of the cluster group; it identifies the cluster and is used for administration purposes | Failover cluster software configuration
Database cluster group | 129.20.5.4 | dbgrp | Virtual address and name for accessing the group of database cluster resources, regardless of the node it is running on | Execution of HA wizard or database-specific cluster scripts
SAP cluster group | 129.20.5.5 | sapgrp | Virtual address and name for accessing the group of SAP resources, regardless of the node it is running on | Configuration of the SAP system for high availability with the software provisioning manager on the first node

Preparation
This section provides information about how to prepare the installation of the SAP system for Microsoft Failover Clustering. For
a complete list of all steps, see section Preparation in the Installation Checklist for a High-Availability System.

1. You check that you have completed the same preparations as for a non-HA system.

2. To make sure that all preparation steps have been performed correctly, check that the storage resources are available to all cluster nodes. If you want to use the CSD option, check that you can move the disk resources from one cluster node to another so that they are accessible from a single node at any time. If you want to use the FSC option, check that the external file share is accessible by your installation user from all cluster nodes.
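Such an accessibility check can be scripted before starting the software provisioning manager. The sketch below only demonstrates the idea, with a local directory standing in for the share during the demonstration; the UNC path \\<sapglobalhost>\sapmnt is the placeholder used in this guide, and the function name is invented, not SAP tooling.

```python
import os
import tempfile

def share_accessible(path):
    """Return True if the current (installation) user can read and
    write `path` - e.g. the sapmnt share for the FSC option."""
    return os.path.isdir(path) and os.access(path, os.R_OK | os.W_OK)

# A temporary local directory stands in for \\<sapglobalhost>\sapmnt here.
with tempfile.TemporaryDirectory() as share:
    assert share_accessible(share)

# An unreachable share must fail the check.
assert not share_accessible(r"\\nonexistent-host\sapmnt")
```

In a real preparation step, you would run such a check as the installation user on every cluster node, since the share must be accessible from all of them.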

Installation
The following sections provide information about how to install the SAP system in a high-availability environment. For a
complete list of all steps, see section Installation in the Installation Checklist for a High-Availability System.

You have the following options to install the database instance:

CSD (Cluster Shared Disk)

You use a highly available database outside the cluster used for the SCS instance. This scenario requires a shared disk for the SCS instance and an additional cluster for the database, which may also require shared disks.

You install the database on a shared disk in the same cluster used for the SCS instance.

FSC (File Share Cluster)

You use a highly available database outside the cluster used for the SCS instance. This scenario does not require shared disks for the SCS instance, but it requires an additional cluster for the database, which may require shared disks.

 Note
The user starting the software provisioning manager must have full access rights on the file share \\
<sapglobalhost>\sapmnt.

Configuring the First Cluster Node

At the beginning of the installation with the software provisioning manager, you will be asked to choose between the FSC and CSD
installation options. For more information, see Installation.

When you run the First Cluster Node option, the software provisioning manager:

Creates the saploc share, pointing to a local disk

Creates the sapmnt share, pointing to a local disk if the CSD option is used, or to the external file share if the FSC
option is used

Installs the central services instance (SCS) and prepares this host as the SAP global host

Creates the SAP cluster group and adds the SCS instance to the SAP cluster group

Installs the enqueue replication server instance (ERS instance) for the SCS instance

Installs the SAP Host Agent

 Caution
When you reboot during the conversion to Failover Clustering, resources fail over to the other cluster node. Therefore, after
each reboot you have to return the system to the state it was in before the reboot.

Prerequisites
You are logged on to the first cluster node as domain administrator or as a domain user who has the required
administration rights. For a list of the required permissions, see Performing a Domain Installation without being a
Domain Administrator.

CSD: You must install the SCS instance on a shared disk, and the ERS instance and SAP Host Agent on a local disk.

FSC: You must install the SCS instance on a local disk, like the ERS instance and the SAP Host Agent.

 Note
If you are installing an SAP NetWeaver 7.5 Process Integration (PI) system, you must use different shared disks
for the ASCS instance and the SCS instance if you are using a shared disk cluster. If you use a file share cluster,
you have to use different sapmnt shares for both instances.

If you select the FSC option at the beginning of the installation, the global parts of an SAP system are stored on an
external file share. The SCS instance, the ERS instance, and the SAP Host Agent are installed on a local disk.

Procedure
1. Run the software provisioning manager and on the Welcome screen, choose <Product> <Database> SAP Systems
<System> High-Availability System First Cluster Node .

 Note
If the software provisioning manager prompts you to log off from your system, log off and log on again.

2. Enter the required parameter values.

 Note

For more information about the input parameters, position the cursor on a parameter and press F1 in the
software provisioning manager.

If you have a Microsoft cluster configuration with more than two nodes in one cluster, apply SAP Note 1634991.

More Information
Moving Cluster Groups, or Services and Applications, or Roles

Installing the Database Instance


This procedure describes how to install the database instance.

Prerequisites
The SAP cluster group is Online on the first cluster node.

Procedure
Perform the following steps on the first cluster node.

1. Run the software provisioning manager and on the Welcome screen, choose <Product> <Database> SAP Systems
<System> High-Availability System Database Instance .

2. Follow the instructions in the software provisioning manager dialogs and enter the required parameter values.

 Note
For more information about the input parameters, position the cursor on a parameter and press the F1 key in the software
provisioning manager.

Configuring the Additional Cluster Node

Prerequisites
You have already performed the First Cluster Node option.

Context
When you run the Additional Cluster Node option, it:

Configures the additional cluster node to run the SAP cluster group

Creates the saploc share, pointing to a local disk

If you chose the FSC option:

Installs the SCS instance

Installs the enqueue replication server instance (ERS) for the SCS instance

Installs the SAP Host Agent


 Caution
You must install the instances and SAP Host Agent on a local disk.

Procedure
1. Run the software provisioning manager and on the Welcome screen, choose <Product> <Database> SAP Systems
<System> High-Availability System Additional Cluster Node .

 Note
If the software provisioning manager prompts you to log off from your system, log off and log on again.

2. Enter the required parameter values.

 Note
For more information about the input parameters, position the cursor on the parameter and press F1 in the
software provisioning manager.

 Caution
Do not accept default values, as they may come from SAP systems that already exist on the cluster.

Related Information
Moving Cluster Groups, or Services and Applications, or Roles

Installing the Primary Application Server Instance

Use
You have the following options to install the primary application server instance:

You install the primary application server instance on a cluster node.

You install the primary application server instance on a host outside of Microsoft Failover Cluster.

Procedure
1. Run the software provisioning manager and on the Welcome screen, choose <Product> <Database> SAP Systems
<System> High-Availability System Primary Application Server Instance .

2. If the software provisioning manager prompts you to log off, choose OK and log on again.

3. Follow the instructions in the software provisioning manager dialogs and enter the required parameter values.

 Note

For more information about the input parameters, position the cursor on a parameter and press F1 in the
software provisioning manager.

If you install the primary application server instance on a cluster node, make sure that on the screen General
SAP System Parameters for the:

Profile Directory, you use the UNC path (not the local path) of the SAPGLOBALHOST host name, for example:

\\<SAPGLOBALHOST>\sapmnt\<SAPSID>\SYS\profile.

If the CSD option is used, the virtual host name of the SCS instance is the same as the SAPGLOBALHOST
host name.

If the FSC option is used, the virtual host name of the SCS instance is different from the SAPGLOBALHOST
host name.

 Note
If you are installing an SAP NetWeaver 7.5 Process Integration (PI) system, make sure that the virtual
host names for the ASCS instance and the SCS instance are different.

Installation Drive, you choose the local disk where you want to install the primary application server
instance.

4. Check that the primary application server instance is running.
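The profile directory entered above is always the UNC path derived from the SAP global host, never a local path. The sketch below (host name and SAP system ID are invented, illustrative values) shows how the path is composed:

```python
def profile_dir(sapglobalhost, sapsid):
    """UNC path of the profile directory, as entered on the
    'General SAP System Parameters' screen."""
    return r"\\{0}\sapmnt\{1}\SYS\profile".format(sapglobalhost, sapsid)

# With the CSD option, the SAP global host is the virtual host name of
# the SCS instance; with FSC, it is the host of the external file share.
print(profile_dir("sapgrp", "PRD"))
```

Entering the equivalent local path (for example D:\usr\sap\PRD\SYS\profile) would break failover, because the path must resolve identically from every cluster node.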

Installing the Additional Application Server Instance


You have to install at least one additional application server instance for Microsoft Failover Clustering.

You have the following options to install the additional application server instance:

You install the additional application server instance on a cluster node.

You install the additional application server instance on a host outside of the failover cluster.

Procedure
1. Run the software provisioning manager and on the Welcome screen, choose <Product> <Database> SAP Systems
<System> High-Availability System Additional Application Server Instance .

2. If the software provisioning manager prompts you to log off, choose OK and log on again.

3. Follow the instructions in the software provisioning manager dialogs and enter the required parameter values.

 Note

For more information about the input parameters, position the cursor on a parameter and press F1 in the
software provisioning manager.

If you install the additional application server instance on a cluster node, make sure that on the screen
General SAP System Parameters for the:

Profile Directory, you use the UNC path (not the local path) of the SAPGLOBALHOST host name, for
example:

\\<SAPGLOBALHOST>\sapmnt\<SAPSID>\SYS\profile.

If the CSD option is used, the virtual host name of the SCS instance is the same as the SAPGLOBALHOST
host name.

If the FSC option is used, the virtual host name of the SCS instance is different from the SAPGLOBALHOST
host name.

Installation Drive, you choose the local disk where you want to install the additional application server
instance.

Additional application server instance, you enter the same instance number as for the primary
application server.

4. When you have finished, change the instance profile of the additional application server instance so that the number of
its work processes equals the number of work processes of the primary application server instance.

5. If required, install more additional application server instances outside of the failover cluster.

 Note
Make sure that on the screen General SAP System Parameters for the Profile Directory, you use the UNC path of
the virtual SCS host name, for example:

\\<SAPGLOBALHOST>\sapmnt\<SAPSID>\SYS\profile.

In an HA system, the virtual host name of the SCS instance is the same as the SAP global host name.

Post-Installation
To complete and check the installation of the SAP system for a high-availability configuration, you need to perform the following
steps:

1. You install the permanent SAP licenses on all cluster nodes.

2. After a new installation of a clustered SCS instance, make sure that you update the saprc.dll file (part of the
NTCLUST.SAR package) in c:\windows\system32 as soon as possible. For more information, see SAP Note 1596496.

3. For information about Rolling Kernel Switch on Windows Failover Clusters, see SAP Note 2199317 .

4. You perform the post-installation checks for the enqueue replication server.

For more information, see the SAP Library at:

SAP Release and SAP Library Quick Link    SAP Library Path

SAP NetWeaver 7.3 (http://help.sap.com/nw73)
SAP NetWeaver 7.3 including Enhancement Package 1 (http://help.sap.com/nw731)

    Application Help > Function-Oriented View > Application Server > Application Server Infrastructure > Standalone Enqueue Server > Installing the Standalone Enqueue Server > Replication Server: Check Installation

SAP NetWeaver 7.4 (http://help.sap.com/nw74)
SAP NetWeaver 7.5 (http://help.sap.com/nw75)

    Application Help > Function-Oriented View > Application Server > Application Server Infrastructure > Components of SAP NetWeaver Application Server > Standalone Enqueue Server > Installing the Standalone Enqueue Server > Replication Server: Check Installation

5. If required, you perform the general post-installation steps listed in this guide.

Additional Information
The following sections provide additional information about:

Moving Cluster Groups, or Services and Applications, or Roles

Starting and Stopping the SAP System in a Microsoft Failover Cluster Configuration

Moving Cluster Groups, or Services and Applications, or Roles

Use
When you reboot during the conversion to Microsoft Failover Clustering, cluster resources fail over to the other cluster node.
Therefore, you have to return the system to the state it was in before the reboot, and move the resources back to the original
node.

To move the database or the SCS instance from one cluster node to the other, you use either the Failover Cluster Manager
tool or PowerShell.

 Note
Microsoft changed the term “cluster groups” in the Failover Cluster Manager tool to Roles. If you use PowerShell, the term
“cluster group” is still used for all cluster operations.

Procedure
Moving Roles or Cluster Groups

To move the roles or cluster groups, proceed as follows:

1. To move a role, open PowerShell in elevated mode, and enter the following command:

move-clustergroup -name "<role name>"

2. Repeat this step for each role that you want to move. If you have more than two nodes in your cluster, you can
specify the cluster node to move to:

move-clustergroup -name "<role name>" -Node "<cluster node name>" -Wait 0
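When several roles have to be moved back after a reboot, the PowerShell calls can be generated from a list. The following Python sketch only assembles the command strings shown above; the role and node names it prints are invented placeholders, not resources of a real system:

```python
def move_clustergroup_cmd(role, node=None):
    """Build the move-clustergroup call for one cluster role (string only)."""
    cmd = 'move-clustergroup -name "{0}"'.format(role)
    if node is not None:
        # On clusters with more than two nodes, target a specific node
        # and return immediately (-Wait 0).
        cmd += ' -Node "{0}" -Wait 0'.format(node)
    return cmd

# Hypothetical role names -- substitute the roles of your own cluster:
for role in ["SAP PRD", "PRD database"]:
    print(move_clustergroup_cmd(role, node="clusternode1"))
```

Generating the commands this way keeps the failback of all roles to the original node repeatable after each reboot during the conversion.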


Starting and Stopping the SAP System in a Microsoft Failover Cluster Configuration

An SAP system in an HA configuration is typically configured into two HA groups: one cluster resource group contains the
database resources, the other group contains the SAP SCS instance.

 Note
When starting a whole SAP system, you first need to start the database instance and then the remaining SAP instances.

When stopping a whole SAP system, you first need to stop all SAP instances and then the database instance.

With the SAP MMC or SAPControl, you can start and stop all SAP instances, whether they are clustered or not, except the
database instance.

With certain HA administration tools (Cluster Administrator, Failover Cluster Manager, or PowerShell), you can only start or
stop clustered SAP instances, such as the SCS instance or the database instance.
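The ordering rule above (database first on start, last on stop) can be made explicit in a script. The sketch below derives both sequences from one instance list; the instance names are invented for illustration only:

```python
def start_sequence(instances):
    """Database instance first, then the remaining SAP instances."""
    db = [i for i in instances if i["type"] == "database"]
    rest = [i for i in instances if i["type"] != "database"]
    return db + rest

def stop_sequence(instances):
    """Exactly the reverse: SAP instances first, database last."""
    return list(reversed(start_sequence(instances)))

# Hypothetical HA system layout -- adapt to your own instances:
system = [
    {"name": "SCS01", "type": "scs"},
    {"name": "HDB", "type": "database"},
    {"name": "J00", "type": "primary_as"},
]
```

Whatever tool issues the actual start and stop calls (SAP MMC, SAPControl, or PowerShell cluster cmdlets), it should follow these two orderings.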

Procedure
Starting and Stopping a Complete System or a Single Instance with SAP MMC or SAPControl

With the SAP MMC or the command line tool SAPControl, you can start or stop the complete SAP system or a single clustered
or non-clustered SAP instance, except the database instance.

To start or stop the database instance, you have to use the tools described in “Starting and Stopping the clustered SCS and
Database Instance”.

For more information about SAP MMC or SAPControl, see Starting and Stopping the SAP System.

 Note

To use SAP MMC or SAPControl for starting or stopping a clustered SAP instance, the "SAP <SAPSID>
<Instance_Number> Service" resource of the clustered instance must be online. Therefore, SAP recommends
keeping the "SAP <SAPSID> <Instance_Number> Service" cluster resource always online, and using the SAP
MMC or SAPControl to start or stop a clustered instance.

You can also start SAPControl in PowerShell.

Starting and Stopping the clustered SCS and Database Instance

With certain HA administration tools, such as PowerShell or the Failover Cluster Manager, you can only start or stop clustered SAP
instances, such as the SCS instance or the database instance. For all other non-clustered instances, such as additional
application server instances or the primary application server instance, you must use the SAP MMC or SAPControl.

Using PowerShell

To start or stop the clustered SCS instance or the database instance with PowerShell, do the following:

1. To start the clustered database instance, open PowerShell in elevated mode, and enter the following command:

start-clusterresource <database resource>

2. To start the clustered SCS instance, open PowerShell in elevated mode, and enter the following command:

start-clusterresource "SAP <SAPSID> <Instance_Number> Instance"

3. To stop the clustered SCS instance, open PowerShell in elevated mode, and enter the following command:

stop-clusterresource "SAP <SAPSID> <Instance_Number> Instance"

4. To stop the clustered database instance, open PowerShell in elevated mode, and enter the following command:

stop-clusterresource <database resource>

Using the Failover Cluster Manager

To start or stop the clustered SCS instance with the Failover Cluster Manager, proceed as follows:

1. Start the Failover Cluster Manager by choosing Start > Administrative Tools > Failover Cluster Manager.

2. To start the SCS instance, select the relevant service and application SAP <SAPSID>.

In the right-hand pane, under Other Resources, right-click the resource SAP <SAPSID> <Instance_Number>
Instance, and choose Bring this resource online.

3. To stop the SCS instance, select the relevant service and application SAP <SAPSID>.

In the right-hand pane, under Other Resources, right-click the resource SAP <SAPSID> <Instance_Number>
Instance, and choose Take this resource offline.

