HADR Users Guide: Public SAP Adaptive Server Enterprise 16.0 SP04 Document Version: 1.0 - 2022-04-15
2 Installation Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1 Requirements and Restrictions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
2.2 System Resource Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Capacity Planning and Sizing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Application Compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
HA Aware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Replication Limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Accessing the ASE Cockpit Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Unsupported Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
11 Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.1 Troubleshooting the HADR System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.2 Recovering from Errors in an HADR System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .386
11.3 Recovering the Primary Data Server If SAP Replication Server is Unavailable. . . . . . . . . . . . . . . . . 396
11.4 Restarting the Primary Data Server Without Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
11.5 Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
SAP Installer Issues. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Recovering from a Failed Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Performing a Teardown. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Removing an HADR Environment Using the removehadr Utility. . . . . . . . . . . . . . . . . . . . . . . . 404
11.6 Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Utilities for Monitoring the HADR System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .406
Monitoring the Replication Agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
11.7 Replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .408
Troubleshooting the Replication System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Troubleshooting the RMA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Checking RMA Version from the Executable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Troubleshooting Data That is Not Replicating. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Troubleshooting a Secondary Truncation Point That is Not Moving. . . . . . . . . . . . . . . . . . . . . . 421
11.8 Performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11.9 Failover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11.10 Access, and Login Redirection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Troubleshooting Replication Agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
11.11 Troubleshooting the Fault Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .430
Fault Manager and SAP Host Agent Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
11.12 Configuring the RMI Port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
The always-on option is a high-availability and disaster recovery (HADR) system that consists of two SAP ASE
servers: one designated as the primary, on which all transaction processing takes place; the other acts as a
warm standby (referred to as a "standby server" in DR mode and as a "companion" in HA mode) for the
primary server and contains copies of designated databases from the primary server.
Note
● The HADR feature included with SAP ASE supports only a single companion server. Versions SP03
PL03 and later support a three-node architecture, with a companion SAP ASE server and a disaster
recovery node.
● You can manage multiple databases (certified with up to 20) at the same time in an HADR environment for
custom applications using SAP Replication Server version 16.0 SP03 PL04 and later.
Some high-availability solutions (for example, the SAP Adaptive Server Enterprise Cluster Edition) share or use
common resources between nodes. However, the HADR system is a "shared nothing" configuration; that is,
each node has separate resources, including disks.
In an HADR system, servers are separate entities, and data is replicated from the primary server to the
companion server. If the primary server fails, a companion server is promoted, either manually or
automatically, to the role of primary server. Once the promotion is complete, clients can reconnect to the new
primary server, and see all committed data, including data that was committed on the previous primary server.
The HADR system includes an embedded SAP Replication Server, which synchronizes the databases between
the primary and companion servers. SAP ASE uses the Replication Management Agent (RMA) to communicate
with Replication Server, and SAP Replication Server uses Open Client connectivity to communicate with the
companion SAP ASE.
Note
The always-on option, which provides the HADR solution, requires the ASE_ALWAYS_ON license.
The Replication Agent detects any data changes made on the primary server, and sends them to the replicate
SAP Replication Server. In the figure above, the unidirectional arrows indicate that, although both SAP
Replication Servers are configured, only one direction is enabled at a time.
The HADR system supports synchronous replication between the primary and standby servers for high
availability, so the two servers stay in sync with zero data loss (ZDL). This requires a network link between the
primary and standby servers that is fast enough for synchronous replication to keep up with the primary
server's workload; that is, the network latency should be comparable to the local disk I/O latency, generally
5 milliseconds or less. Anything longer than a few milliseconds may result in a slower response to write
operations at the primary.
The HADR system supports asynchronous replication between the primary and standby servers for disaster
recovery. The primary and standby servers using asynchronous replication can be geographically distant,
meaning they can have a slower network link. With asynchronous replication, the Replication Agent thread
captures the primary server's workload, which is delivered asynchronously to SAP Replication Server, which
then applies these workload changes to the companion server.
The most fundamental service offered by the HADR system is the failover—planned or unplanned—from the
primary to the companion server, which allows maintenance activity to occur on the old primary server while
applications continue on the new primary.
The HADR system provides protection in the event of a disaster: if the primary server is lost, the companion
server can be used as a replacement. Client applications can switch to the companion server, and the
companion server is quickly available for users. If the SAP Replication Server was in synchronous mode before
the failure of the primary server, the Fault Manager automatically initiates failover, and HA-aware applications
are transparently failed over with zero data loss. There may be some data loss if the SAP Replication Server
was in asynchronous mode, in which case you must fail over manually for disaster recovery. If the
HADR cluster is in asynchronous mode, applications are not transparently failed over because the cluster
anticipates that it may need to perform some corrective actions concerning potential data loss prior to starting
new transactions.
Connection attempts to the companion server without the necessary privileges are silently redirected to the
primary companion via the login redirection mechanism, which is supported by Connectivity libraries (see
Developing Applications for an HADR System [page 354]). If login redirection is not enabled, client connections
fail and are disconnected.
● SAP ASE
● SAP Replication Server
There are a number of tasks you perform prior to installing and configuring the HADR system, including
reviewing the recommendations and restrictions, planning the system's capacity and sizing, and verifying your
application compatibility.
There are a number of requirements and restrictions for the HADR system.
Note
Set the tcp parameter net.ipv4.tcp_retries2 to 8 for all hosts in the HADR system including
the Fault Manager host. See Adjusting the Client Connection Timeout for Linux in Configuration
Guide for UNIX for more details.
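On Linux hosts, this is typically set in /etc/sysctl.conf (applied with sysctl -p, or immediately with
sysctl -w net.ipv4.tcp_retries2=8 as root); the file location can differ by distribution:

```
# /etc/sysctl.conf fragment: lower TCP retransmission retries for
# faster detection of broken connections in the HADR system
net.ipv4.tcp_retries2 = 8
```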
○ HP IA-64
○ AIX 64-bit
○ Windows x64
● Operating system version and patch levels – see HADR Operating System Support List or SAP Note
2489781 .
● (Windows) Install Microsoft Visual C++ redistributable package for VS2013. If this package is not available,
the Fault Manager installer fails, and you see this message:
● Use the same platform for both the primary and the companion servers. For example, you cannot have a
primary server on Solaris and a companion on Linux.
● Create an operating system user named "sybase" to install and maintain the software.
● The HADR system requires two hosts; installing Fault Manager requires a separate third host. The Fault
Manager host must have the same platform as the HADR nodes. For example, if you install the HADR nodes
on Linux x64, you must also install the Fault Manager host on Linux x64.
● Synchronous replication requires a solid state drive (SSD) or other fast storage device.
● (Linux) Fault Manager requires GLIBC version 2.7 or later.
● (HP) Fault Manager requires the C++ libCsup11.so.1 library.
● A cluster ID (CID) database. The cluster ID is a three-letter identifier for the cluster. The CID database is for
internal use only.
● Do not use the sp_dbextend system procedure to extend the size of databases. Doing so can result in the
active and primary databases not being the same size, and replication to the standby server could be
blocked. Additionally, mismatched database and device sizes can cause access to applications and
rematerialization efforts to block after a failover.
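Instead of sp_dbextend, a participating database can be grown explicitly with alter database, run
identically on both sites so database and device sizes stay matched. A minimal sketch, in which the
database name user_DB and device name data_dev1 are illustrative:

```sql
-- Extend the database by 2 GB on a named device; run the same command
-- on the companion site so the primary and standby sizes stay identical
alter database user_DB on data_dev1 = "2G"
```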
● HADR is not supported on the Developer and Express Edition licenses.
● Business Suite does not support SAP ASE Cockpit.
● Replication of a table without unique keys is not supported if the table contains a Java object datatype column.
Installing the HADR system with disaster recovery includes a number of system resource requirements.
● Each database that participates in HADR, including the master database, requires a minimum of 2GB of
space for the simple persistent queue (SPQ).
● Each database of any volume likely requires an additional CPU core for processing at the replicate
system. High volume databases or databases with very wide tables may require additional CPU cores. If
the replicate SAP ASE is used only for DR purposes, it is probably using a small amount of processing
power, so its CPU capacity is likely available to other servers. However, if the standby system is used for
reporting, additional CPU capacity may be needed for the system.
● HADR components may need approximately 2GB of memory for each replicated database. Since SAP ASE
typically uses pinned shared memory segments that are preallocated (unlike CPU, which is not),
Replication Server cannot easily share memory with SAP ASE. As a result, this 2GB of memory is in
addition to other SAP ASE requirements.
● Replication Server components require three consecutive ports, beginning with the Replication Server port
number (for example, 5005, 5006, and 5007), and the RMA requires five consecutive ports, ending with the
specified port number (for example, 4988, 4989, 4990, 4991, and 4992). These ports must be accessible
from the other hosts involved in the HADR system, including the Fault Manager hosts.
When you configure Replication Server, use the primary transaction log and the expected rate of primary
transaction log generation as the key parameters to tune and optimize the replication environment to ensure
optimal performance.
SAP recommends the following sizes for the server resources in an HADR system:
Small 7 4 2 1
Medium 15 8 4 2
Large 25 16 8 4
Extra Large 25 24 16 8
These examples describe the tuning parameters (in terms of storage and computing power, in GBs and CPUs,
respectively) that are used to achieve the best performance for a given rate of output log generation:
● If the rate of primary log generation is 3.5 GB per hour and the acceptable latency is less than five seconds,
use the sap_tune_rs command to configure Replication Server with 4 GB of storage and 2 CPUs. For
example:
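Based on the parameter list described later in this section (<site_name>, memory_limit, and number of
CPUs), one possible form of the call is sketched below; the site name Site1 is illustrative, and sap_tune_rs
[page 545] documents the exact syntax:

```sql
-- Tune Replication Server for 4 GB of memory and 2 CPUs
sap_tune_rs Site1, "4G", "2"
```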
This example assumes there is a single user database being replicated; user sites can have many more
than this. See Tuning the HADR Components [page 337] and sap_tune_rs [page 545].
● Include a single large transaction thread if there are occasional large transactions in your applications by
setting dsi_num_large_xact_threads=1. See the Replication Server Administration Guide - Volume 2 >
Performance Tuning > Using parallel DSI threads > Configuring parallel DSI for optimal performance.
● If the rate of primary log generation is considerably high (for example, 11 GB per hour) and the acceptable
latency is less than five seconds, configure the Replication Server with 8 GB of storage and 4 CPUs as the
computing power.
● Use sp_spaceused syslogs or select loginfo('<dbname>', 'active_pages') to measure the
primary server's transaction log generation for a period of time.
Note
Make sure you disable the trunc log on chkpt database option and any other commands that truncate
the transaction log. The transaction log must not be truncated while you measure the transaction log
generation rate.
The following example takes a sample of the log generation rate on the primary server for the user_DB
database (first verifying that trunc log on chkpt is disabled on this database). This example uses a 10-
minute period of time, which is short for a production system. Your site should use a longer time period to
view a reasonable output:
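A sketch of such a sampling session is shown below; it assumes the user_DB database, a 10-minute
interval, and permission to change database options (output formats are described in the Reference
Manual: System Procedures):

```sql
-- Verify that automatic log truncation is disabled for the database
sp_dboption user_DB, "trunc log on chkpt", false
go
use user_DB
go
-- First sample of transaction log usage
sp_spaceused syslogs
go
-- Wait 10 minutes; use a longer interval on a production system
waitfor delay "00:10:00"
go
-- Second sample; the difference is the log generated over the interval
sp_spaceused syslogs
go
```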
See the Reference Manual: System Procedures for more information about sp_spaceused.
The sap_tune_rs command accepts these input parameters: <site_name>, memory_limit, and number
of CPUs. Replication Server uses these parameters to achieve the best performance while keeping latency to
a minimum.
Use the sap_tune_rs command with the appropriate parameters to display the modified Replication Server
configuration for an overview of the modified Replication Server tunings (configuration parameters).
The sap_tune_rs command prompts you to restart Replication Server for the sap_tune_rs modifications to take
effect.
● HADR-aware clients – through the Connectivity drivers, these clients receive HA failover event notifications
and use features that identify when, and to which server, to reconnect after a failover event. The application
takes the necessary actions to re-establish the context (for example, the default database, set options,
prepared statements, and so on), and resubmits incomplete transactions. If the application has created any
intermediate data in non-replicated databases as part of re-establishing the context (for example, in
tempdb), it must regenerate this intermediate data as well.
● HA-aware clients – use existing HA failover mechanisms to move to the new primary server after a failover
event. However, they must re-establish context in the new primary server and resubmit incomplete
transactions (with the configuration changes), similar to HADR-aware clients.
● Cluster Edition-aware clients – similar to HADR and HA-aware clients, connections from Cluster Edition-
aware clients automatically reconnect to the new primary server after a failure. In addition, similar to HADR
and HA-aware clients, the application should include functionality to re-establish context and resubmit
failed transactions.
2.4.1 HA Aware
Required changes to CTLIB applications running in an HADR system include setting the CS_HAFAILOVER
property, modifying the interfaces file, writing application failover messages, and adding return codes.
Procedure
1. Set the CS_HAFAILOVER property using the ct_config and ct_con_props CTLIB API calls. Set this
property at either the context or the connection level using the following syntax:
2. Modify the interfaces file so clients fail over to the secondary companion.
The interfaces file includes a line labeled hafailover that enables clients to reconnect to the secondary
companion when the primary companion crashes, or when you issue a shutdown with nowait, which
triggers a failover.
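An interfaces file entry with such a line might look like the following; the server names, host name, and
port are illustrative:

```
PRIMARY_ASE
    master tcp ether host1 5000
    query tcp ether host1 5000
    hafailover SECONDARY_ASE
```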
3. Write application failover messages according to these parameters:
○ As soon as the primary begins to go down, clients receive an informational message that failover is
about to occur. Treat this as an informational message in the client error handlers.
○ After you set the CS_HAFAILOVER failover property and the interfaces file has a valid entry for the
hafailover server, the client connection is a failover connection, and clients reconnect to the
secondary companion appropriately.
However, if the failover property is set but the interfaces file does not have an entry for the
hafailover server (or vice versa), the connection is a normal connection with the failover property
turned off, rather than a failover connection. Inform the user to check the failover property to determine
whether the connection is a failover connection.
4. Add return codes.
When a successful failover occurs, the client issues a return value named CS_RET_HAFAILOVER, which is
specific to the following CTLIB API calls:
ret = ct_send(cmd)
CS_RET_HAFAILOVER is returned from the API call during a synchronous connection. In an asynchronous
connection, these APIs return CS_PENDING, and the callback function returns CS_RET_HAFAILOVER.
Rebuild your applications, linking them with the libraries included with the failover software.
Note
In a Custom Application environment, you cannot connect clients with the failover property until you
issue sp_companion resume. If you do try to reconnect them after issuing sp_companion
prepare_failback, the client stops responding until you issue sp_companion resume.
An sp_primarykey designation is insufficient for application tables in databases participating in the HADR
system; they require a primary key constraint or a unique index. Although some tables may work without a
primary key, operations such as normal updates and deletes, as well as inserts, updates, and deletes of large
object (LOB) data, may be extremely slow. This dramatically increases latency, which in turn significantly
increases the failover time.
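For example, an application table that lacks a declared key can be given one in either of these ways (the
reptbl table and pcol column are taken from the example later in this section):

```sql
-- Add a primary key constraint to an existing application table
alter table reptbl add constraint reptbl_pk primary key (pcol)

-- Alternatively, create a unique index on the key column
create unique index reptbl_uidx on reptbl (pcol)
```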
Restrictions for Columns that Use Float or Real Datatypes as Primary Key Columns
Application tables should not use columns that use float or real datatypes as primary key columns. The
interpretation of approximate numerics such as float or real is often offloaded to floating point processors
on the CPU. Different CPU versions may have different floating point unit (FPU) versions, so even the same
CPU hardware (such as Intel x86/64 Haswell EX) may translate the floating point values differently. Because
the HADR system uses logical replication, update and delete where clauses constructed from primary keys
with float or real datatypes may return 0 rows affected if the inserted float or real value was interpreted
differently by the underlying hardware. Due to replication validation at the replicate, this condition would result
in the HADR system suspending delivery to the standby server until the problem is fixed.
Application tables with primary keys based on sequential or monotonically increasing values should not
perform multirow updates on the primary keys using an expression.
In this example, pcol is the primary key for the reptbl table in the primary database, and it includes three
rows with values of 1, 2, and 3:
pcol
-----------
1
2
3
Running this command may cause errors or incorrect data in the replicate database:
update reptbl
set pcol = pcol + 1
The values for pcol after running the command at the primary database are:
pcol
-----------
2
3
4
Replication Agent retrieves the log records and submits the records to Replication Server using commands
similar to:
update reptbl
set pcol = 2 where pcol = 1
update reptbl
set pcol = 3 where pcol = 2
update reptbl
set pcol = 4 where pcol = 3
However, because Replication Server treats each row as an independent update, the first row is updated three
times and the second row is updated twice. If there is a unique index on the table, the additional updates cause
errors in the replicate database. If the replicate table does not contain a unique index, the table ends up with
duplicate rows.
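One way to avoid these collisions for the three-row example above is to update the keys one row at a time
in descending order, so that no intermediate update creates a duplicate key value. This is an illustrative
workaround, not a prescribed procedure:

```sql
-- Update the highest key first; each statement affects exactly one row,
-- so Replication Agent forwards unambiguous where clauses
update reptbl set pcol = pcol + 1 where pcol = 3
update reptbl set pcol = pcol + 1 where pcol = 2
update reptbl set pcol = pcol + 1 where pcol = 1
```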
If the standby system is used for reporting, the reporting applications cannot be sensitive to timing differences
caused by cross-database transactions. For example, if a transaction inserts data into database_A and
database_B on the primary, because these inserts proceed independently and in parallel through the HADR
system, they may be applied in a different order at the replicate databases. This may result in brief data
inconsistencies for reports that query across both databases.
This restriction holds true for cross-database declarative constraints in which two independent transactions at
the primary insert into database_A and database_B, respectively, and a foreign key exists from database_B
to database_A. At the standby, due to independent and parallel processing of the different database log
records, the child insert in database_B may happen ahead of the parent row in database_A (the HADR
system is able to suspend DSI enforcement for write operations).
Configuration Restrictions
Do not set these configuration parameters to 0 at both the server and the connection levels:
● sqm_cmd_cache_size
● sqt_max_cache_size
● dsi_sqt_max_cache_size
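You can check the current server-level value of any of these parameters before changing it; for example,
from an isql session connected to Replication Server (the parameter name shown is one of the three
listed above):

```sql
-- Display the current setting of a Replication Server configuration parameter
admin config, "sqt_max_cache_size"
go
```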
SAP Adaptive Server Enterprise Cockpit (ASE Cockpit) is an administration tool for managing and monitoring
SAP ASE and the HADR system. ASE Cockpit supports SAP ASE version 16.0 SP02.
ASE Cockpit provides availability monitoring, historical monitoring, and real-time monitoring. It offers real-time
alerts of availability, performance, and capacity issues, intelligent tools for spotting performance and usage
trends, as well as the general health of the HADR system. Availability, performance, and capacity alerts are
configured and enabled by default. Unlike SAP Control Center, SAP ASE Cockpit is designed as an onboard
management solution, where you install the cockpit on each SAP ASE host to manage and monitor that
system.
The HADR system does not support the full functionality of SAP ASE version 16.0 SP03.
The steps for installing the HADR system differ depending on whether you are installing a completely new
environment, using an existing SAP ASE for the primary, using the SAP installer with or without a response file,
or installing using the setuphadr utility.
● Installing a new system – see Installing a New System [page 29] and Using setup.bin or setupConsole.exe
with a Response File [page 66]
● Using an existing SAP ASE for the primary companion – see Installing HADR with an Existing System [page
85]
You can configure, or migrate an existing, HADR system now, or run the $SYBASE/$SYBASE_ASE/bin/
setuphadr utility to configure HADR at a later time. See Migrating an SMP Server to an HADR System [page
86].
SAP ASE supports the HADR system in a remotely distributed two-node topology with primary and companion
sites. Run the installer separately on both sites.
Note
Installing the software on the first node prepares SAP ASE and Backup Server for the HADR system.
However, the actual setup of the HADR system occurs when you install the software on the second node.
The HADR system requires that the always-on nodes use SSD (solid-state drive) or another type of fast storage
device for Replication Server if you configure the HADR system with synchronous replication mode.
See Migrating an SMP Server to an HADR System [page 86] for information about migrating an existing SAP
ASE server to an HADR system.
Installing the SAP HADR system requires you to enter and re-enter values for the primary, standby, and Fault
Manager hosts. Record the values in a worksheet as you go through the installation to use as future reference.
Sample (or default) values used in this guide are provided in square brackets.
Note
[/work/SAP1]
[No]
3) Data directory
[/work/SAP1/data]
[SFSAP1]
[SJSAP2]
7) Technical user
[tech_user]
9) Host name
[SFMACHINE1]
[SJMACHINE2]
[5000]
[/work/SAP1/ASE-16_0/
install/SFSAP1.log]
[Mixed (OLTP/DSS)]
[4k]
[us_english]
[iso_1]
[bin_iso_1]
[SFSAP1_BS]
[5001]
[/work/SAP1/ASE-16_0/
install/SFSAP1_BS.log]
[None]
[AS1]
[sync]
[SFHADR1]
[SJHADR2]
[/work/SAP1/data]
[7000]
Note
RMA RMI occupies five consecutive ports, with the configured port occupying the highest number. If the
configured RMA RMI port number is 7000, for example, it also needs ports 6999, 6998, 6997, and 6996.
[7001]
[5005]
[/work/SAP1/data]
[256MB]
[/work/SAP1/data]
[2000MB]
[DR_maint]
[DR_admin]
[SFMACHINE1]
[SJMACHINE2]
[4282]
[4283]
[4998]
[4992]
[sccadmin]
[uafadmin]
[sapadm]
[13797]
[13777]
[13787]
The HADR system uses the SAP Host Agent to perform several lifecycle management tasks, such as operating-
system monitoring, database monitoring, system instance control, and provisioning.
You can install the SAP Host Agent when you install the HADR system (see Unloading the SAP ASE Binaries) or
manually at another time. SAP Host Agent requires root or "sudo" permission to install. The SAP Host Agent
needs to run as root to perform these operations:
See Installing SAP Host Agent Manually to manually install the SAP Host Agent.
Note
Before you install Fault Manager on a third host, install SAP Host Agent on the hosts running SAP ASE, and
set the sapadm operating system password.
Installing the SAP Host Agent manually requires a .SAR file, which is located in the archives directory of the
location where you extracted the installation image.
Starting and stopping the SAP Host Agent requires sudo or root privileges.
The SAP Host Agent is usually started automatically when you restart the operating system. You can also
manually control it using the saphostexec program.
● (Windows)
● (UNIX)
Where <ProfilePath> is the path to the profile file (host_profile) of the SAP Host Agent. On UNIX, issue the
ps command to determine the profile path if Fault Manager is running; the pf= argument in the output shows
the profile path:
ps -ef|grep sap
root 11727 1 0 Dec11 ? 00:00:06 /usr/sap/hostctrl/exe/saphostexec pf=/usr/sap/hostctrl/exe/host_profile
sapadm 11730 1 0 Dec11 ? 00:00:24 /usr/sap/hostctrl/exe/sapstartsrv pf=/usr/sap/hostctrl/exe/host_profile -D
root 11764 1 0 Dec11 ? 00:02:54 /usr/sap/hostctrl/exe/saposcol -l -w60 pf=/usr/sap/hostctrl/exe/host_profile
sap 24316 22274 0 12:18 pts/5 00:00:00 grep sap
By default, the host_profile file is located in the executable directory. <Option> is one of:
See SAP Host Agent Reference - Command Line Options of the saphostexec Executable for a complete list of
the saphostexec options.
See Installing HADR with an Existing System [page 85] for information about installing and configuring an
HADR system using an existing SAP ASE server.
If it does not yet exist, the installer creates the target directory and installs the selected components into that
directory. At the end of the installation, verify that the product has installed correctly. You may also need to
perform additional configuration procedures.
Procedure
1. Insert the installation media in the appropriate drive, or download and extract the SAP ASE installation
image from Software Downloads of the SAP Support Portal at http://support.sap.com/swdc .
2. (UNIX) Verify the stack size limit is at least 8192. To check the stack size limit, enter:
○ On the Bourne shell – ulimit –s
○ On the C-shell – limit stacksize
Make sure the <LANG> environment variable is set to C or any other value that is valid on your system. By
default, the <LANG> environment variable on UNIX is set to POSIX, which can cause the installation to fail on
the secondary system.
3. (AIX only) Set the data size limit to "unlimited":
○ On the Bourne shell – ulimit -d unlimited
○ On the C-shell – limit datasize unlimited
4. If you downloaded the product from SAP Service Marketplace, log in as the "sybase" user, or as the user
you added with installation and configuration privileges, and change to the directory where you extracted
the installation image.
5. Start the installer:
./setup.bin
The location of the mount command is site-specific and may differ from the instructions shown here. If you
cannot mount the drive using the path shown, check your operating system documentation or contact
your system administrator.
Note
mount commands and arguments vary by platform. See the installation guide for your platform for more
information.
cd /mnt/<device_name>
./setup.bin
Where
○ <device_name> is the directory (mount point) you specified when mounting the CD or DVD drive.
○ setup.bin is the name of the executable file for installing SAP ASE.
Use the -r parameter to record your inputs in a response file when you run the SAP installer:
./setup -r <path_to_response_file>
For example:
./setup -r /work/SAP1_response_file.txt
See Installing the HADR System with Response Files, Console, and Silent Mode [page 66].
If there is not enough disk space in the temporary disk space directory, set the IATEMPDIR environment
variable to <tmp_dir> (<TEMP> on Windows) before running the installer again, where <tmp_dir> is
where the installation program writes the temporary installation files. Include the full path to <tmp_dir>.
8. If you are prompted with a language selection list, specify the appropriate language.
9. On the Introduction screen, click Next.
Note
The machine hosting the SAP Host Agent requires that you have sudo permission. The Fault Manager
requires the SAP Host Agent and the sapadm operating system user created by the SAP Host
Agent installation. If you do not have sudo permission, the system administrator can install SAP Host
Agent later.
On Windows, enter a password for the sapadm operating system user that adheres to the operating system
password requirements, such as length, number of characters and digits, and so on.
16. Select the appropriate license type.
Note
To configure the HADR installation now, continue to Configuring SAP ASE [page 41].
To configure HADR at a later time, use the $SYBASE/$SYBASE_ASE/bin/setuphadr utility. See Installing
HADR with an Existing System [page 85].
Prerequisites
These steps assume you have successfully completed the steps in the previous topic, Unloading the SAP ASE
Binaries [page 29].
The HADR system requires Backup Server. The default language, character set, sort order, page size, system,
user database sizes, and passwords for SAP ASE and Backup Server must be the same on the primary and
Procedure
1. The Configure New Servers screen shows a list of all items you can minimally configure. By default, all products are selected. Verify that the selections are appropriate for your site, and click Next.
2. On the Configure Servers with Different User Account screen, indicate whether you are configuring the
servers under a different user account.
3. In the User Configuration Data Directory screen, accept the default directory or enter a new path to specify where to install the SAP ASE binaries, then click Next. This directory is your install directory, and the value mapped to $SYBASE. Make sure you have the correct permissions and sufficient space to create the directories.
Option – Description
○ SAP ASE Name – Server name (do not include underscores in the name).
○ System Administrator's Password – Enter and confirm your password. Use the same value for both the primary and standby sites.
○ Enable SAP ASE for SAP ASE Cockpit Monitoring – Select to enable SAP ASE Cockpit to monitor SAP ASE.
○ Technical User – Select and confirm the technical user name and password if you are enabling SAP ASE Cockpit monitoring.
○ Error Log – Name and location of the error log file. Defaults to servername.log.
○ Application Type – (Must be the same on the primary and standby sites to be in sync with page size, default language, and so on) Select one:
○ (Default) MIXED – both OLTP and DSS.
○ OLTP – online transaction processing generally consists of smaller, less complex transactions.
○ DSS – decision-support systems generally have less update activity with large, complex queries.
○ Page Size – Must be the same on the primary and companion servers: 2 KB, (default) 4 KB, 8 KB, or 16 KB.
○ Default Language – Use the same value for both the primary and standby sites. The default is us_english.
○ Default Character Set – (Must be the same on the primary and standby sites) The defaults are roman8 (HP Itanium), cp850 (Windows), and iso_1 (other platforms).
○ Default Sort Order – (Must be the same on the primary and standby sites) The defaults are bin_roman8 (HP Itanium), bin_cp850 (Windows), and bin_iso_1 (other platforms).
○ Optimize SAP ASE Configuration – Check the box to optimize the configuration for your system.
Note
If you specify a value that is larger than the available resource for allocation to the server, the optimized configuration may fail, causing the server not to start.
○ Create Sample Databases – Select this option for the installer to install sample databases. The installer automatically calculates any additional space needed for your master device.
Configuration – Value
○ System Procedure Device – The full path to the system procedure device.
○ System Procedure Device Size (MB) – The default is 196 MB, regardless of logical page size.
○ System Procedure Database Size – The default is 196 MB, regardless of logical page size.
○ 2 KB – 3 MB
○ (Default) 4 KB – 6 MB
○ 8 KB – 12 MB
○ 16 KB – 24 MB
○ Tempdb Device Size (MB) – The default is 100 MB, regardless of logical page size.
○ Tempdb Database Size – The default is 100 MB, regardless of logical page size.
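The page-size-dependent defaults listed above scale linearly, at 1.5 MB per KB of logical page size; a quick arithmetic check:

```shell
# Sketch: reproduce the default-size list above (2 KB -> 3 MB, 4 KB -> 6 MB,
# 8 KB -> 12 MB, 16 KB -> 24 MB): size_MB = page_KB * 3 / 2.
for page_kb in 2 4 8 16; do
    echo "${page_kb} KB page -> $((page_kb * 3 / 2)) MB"
done
```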
Configuration Value
Port Number The port number of the Backup Server. A unique port number between 1025 and
65535. The default is 5001.
Allow Hosts (Unnecessary for HADR) Specify any remote hosts you want to use or to connect
to this Backup Server. You can add primary and standby hosts, but this is not
required because the installer updates the information when it builds the HADR
system.
Configuration Value
Port Number The port number of the XP Server. The default is 5002.
Configuration Value
Port Number The port number of the Job Scheduler Agent. The default is 4900.
Configuration Value
Note
RMA RMI occupies five consecutive ports, with the configured port occupying the highest number.
If the configured RMA RMI port number is 7000, for example, it also needs ports 6999, 6998, 6997,
and 6996.
○ RMA TDS port – The port number for RMA TDS. The default is 7001.
○ Replication Server port – The port number on which Replication Server talks to SAP ASE. The default is
5005.
○ SRS device buffer directory – The directory in which you create the Replication Server buffer devices. The device buffer consists of inbound and outbound queues, which should be located on different file systems.
○ SRS device buffer size (MB) – The size of the buffer device. The default is 256 MB (recommended three
times the aggregate of all log devices).
○ SRS simple persistent queue directory – The full path to the persistent queue.
If you are configuring the HADR system with synchronous replication, SAP recommends that you
specify a directory on an SSD (solid state drive) or other type of fast storage device for the
Replication Server simple persistent queue directory.
○ SRS simple persistent queue size (MB) – The size of the persistent queue. The default is 2000 MB.
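The RMA RMI port rule in the note above (five consecutive ports, with the configured port the highest) can be sketched as:

```shell
# Sketch: list the five consecutive ports the RMA RMI occupies; the
# configured port (7000 is the installer default) is the highest of the five.
rmi_port=7000
low=$((rmi_port - 4))
echo "RMA RMI uses ports ${low}-${rmi_port}"
```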
Note
User names and passwords must be the same on both primary and secondary companion servers. The
user name must start with an alphabetic character and cannot exceed 30 characters in length.
Passwords must have at least 6 characters.
○ ASE HADR maintenance user – Name of the user replicating DML and DDL commands. The default is
DR_maint.
○ ASE HADR maintenance user password – Enter and confirm the user's password.
○ RMA administrator – Name of the Replication Server administrator. DR_admin by default.
○ RMA administrator password – Enter and confirm the administrator password.
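A minimal sketch of the name and password rules in the note above; the helper functions are hypothetical, not part of any SAP tool:

```shell
# Sketch: check a user name / password against the rules above:
# the name starts with a letter and is at most 30 characters;
# the password has at least 6 characters.
valid_user() {
    case "$1" in
        [A-Za-z]*) [ "${#1}" -le 30 ] ;;
        *) return 1 ;;
    esac
}
valid_pass() { [ "${#1}" -ge 6 ]; }
valid_user DR_maint && valid_pass DRmaint1 && echo "DR_maint/DRmaint1 OK"
```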
Click Next to create the databases manually, and then replicate and load the data later, or to add the
databases to replicate and materialize later from SAP ASE Cockpit.
Note
The HADR system requires a cluster ID database. If you did not enter this database, the installer creates a 200 MB database by default. You should, however, create a database that is larger than this: the default size (based on the model database) is too small for an HADR system because some Fault Manager activities use the CID database and generate a large number of log records. If you create a user database before creating the CID database, the installer sizes the CID database appropriately for user data replication.
You can increase the size of any database after you finish the HADR setup with the alter database command (you may need to first create a new disk device with the disk init command), and you can add the databases to replicate and materialize later from SAP ASE Cockpit.
15. In the User Database to Replicate screen, enter the values for:
Note
Do not select the sybmgmtdb database. It is used by the Job Scheduler and should not be replicated.
○ Enter Devices to be created for the new database – Click the Add button and enter:
○ Device type – Data or log device.
○ Logical device – Name of the logical device.
○ Physical device path – Full path to the device.
○ Device size – Size of device, in megabytes.
Click OK to return to the Replicate Databases in ASE HADR screen, then click Next.
16. In the ASE HADR Secondary Site screen, choose Yes if the companion site is up, or No if it is not. Click Next.
○ Site name – The name of the site for the HADR system (value must be different from the name of
the primary site).
○ SAP ASE host name – The name of the machine on which the secondary site SAP ASE is running.
○ SAP ASE installation directory – The directory where SAP ASE was installed.
○ SAP ASE Name – The name of the secondary server.
○ SAP ASE port – The number of the port on which the secondary SAP ASE is listening.
○ Backup Server Name – The name of the Backup Server.
○ Backup Server port – The number of the port on which the Backup Server for the secondary server
is listening.
○ Database dump directory – The default directory in which the secondary server performs dumps.
○ RMA RMI port – The port number for the RMA RMI. The default is 7000.
Note
RMA RMI occupies five consecutive ports, with the configured port occupying the highest
number. If the configured RMA RMI port number is 7000, for example, it also needs ports
6999, 6998, 6997, and 6996.
○ RMA TDS port – The port number for the RMA. The default is 7001.
○ Replication Server port – The port number on which the secondary Replication Server talks to SAP
ASE.
b. If you installed the SAP ASE Cockpit, set the Cockpit Hosts and Ports option.
You can accept the default options, or specify other, unused ports, to ensure that the port numbers do
not conflict with those used by other applications or services on your system, then click Next:
○ Host Name – The name of the machine on which you are installing cockpit.
○ HTTP Port – choose an integer between 1025 and 65535 (the default is 4282).
○ HTTPS Port – choose an integer between 1025 and 65535 (the default is 4283).
○ TDS Port – choose an integer between 1025 and 65535 (the default is 4998).
○ RMI Port – choose an integer between 1025 and 65535 (the default is 4992).
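The port guidance above (stay in the 1025-65535 range and avoid ports already in use) can be sketched as a hedged pre-check; check_port is a hypothetical helper, and the listener scan assumes ss is available, as on most modern Linux systems:

```shell
# Sketch: validate a candidate cockpit port (1025-65535) and warn if some
# process is already listening on it. check_port is illustrative only.
check_port() {
    if [ "$1" -lt 1025 ] || [ "$1" -gt 65535 ]; then
        echo "port $1 out of range"
    elif ss -ltn 2>/dev/null | grep -q ":$1 "; then
        echo "port $1 already in use"
    else
        echo "port $1 looks free"
    fi
}
# the four cockpit defaults named above
for p in 4282 4283 4998 4992; do check_port "$p"; done
```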
When the installation is finished, you see the Installation Completed screen. Click Done.
If you have not yet configured the secondary site, repeat the installation steps there to complete the
SAP ASE database HADR configuration. The installer generated a response file named after the site
you last created ($SYBASE/log/companion_responses.txt or $SYBASE/log/
primary_responses.txt). Copy this to the other site and use it to configure the next server using
the -f parameter.
The order does not matter for new installations, so you can install a companion or primary server first.
If both the primary and companion sites are configured, run the Fault Manager installer on a third host. The
Fault Manager installer is located in <ASE_installer>/FaultManager. See Installing and Configuring
the Fault Manager [page 109]. The installer generated a response file named $SYBASE/log/
fault_manager_responses.txt, which you use to install and configure the Fault Manager.
Note
(UNIX only) The installation process installs the SAP Host Agent and creates the sapadm operating system login, but the login's password is not set. Run sudo passwd sapadm on the primary and companion sites to set the password before you install the Fault Manager.
In addition to the SAP installer, you can use response files, and console and silent mode to install the HADR
system.
Using setup.bin (setupConsole.exe on Windows) with a response file allows you to automate the HADR
installation or install the Fault Manager.
./setup.bin -f <response_file>
Sample response files for the primary server, companion server, and the Fault Manager are located in:
In addition to these response files, you can generate your own by:
● Running the SAP installer with the -r <response_file> parameter to record your selections to a
response file. The -r parameter requires an absolute path.
● Generating response files in the $SYBASE/log directory based on your input from the GUI installation for the primary and companion servers. However, these response files are incomplete, and you cannot use them in a silent mode installation (the installation prompts you for missing information).
Edit each of these response files for your site, and run them with the setup.bin installer to install and
configure the HADR system.
If you do not include the passwords in the response files, setup.bin prompts you for them during the console
installation.
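One way to avoid those prompts is to seed the password properties into your response-file copy beforehand; a minimal sketch, in which the stub path and password are examples and SY_CFG_ASE_PASSWORD is the sa-password property named later in this guide:

```shell
# Sketch: copy a response file and fill in the sa password so the console
# install does not prompt. The stub file and password here are examples.
printf 'SY_CFG_ASE_PASSWORD=\n' > /tmp/SFASE1_response.txt
sed -i 's|^SY_CFG_ASE_PASSWORD=.*|SY_CFG_ASE_PASSWORD=Secret123|' /tmp/SFASE1_response.txt
grep '^SY_CFG_ASE_PASSWORD=' /tmp/SFASE1_response.txt
```

Keep any file that holds real passwords readable only by the installing user.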
For example, if you edit the response file for HADR and rename it SFASE1_response.txt, the output looks
similar to:
./setup.bin -f /work/SAP1/SFASE1_response.txt
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
If you edit the Fault Manager response file for your site and rename it FM_response.txt, the output looks
similar to:
./setup.bin -f /work/SAP1/FM_response.txt
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Graphical installers are not supported by the VM. The console mode will be used
instead...
===============================================================================
Fault Manager (created with InstallAnywhere)
-------------------------------------------------------------------------------
Preparing CONSOLE Mode Installation...
===============================================================================
Introduction
------------
InstallAnywhere will guide you through the installation of Fault Manager 1.0
GA.
It is strongly recommended that you quit all programs before continuing with
this installation.
Before you proceed, make sure that:
* SAP ASE, Replication Management Agent (RMA), Replication Server, and SAP Host
Agent are set up and running on the primary and companion sites.
* "sapadm" operating system user has a valid password on the primary and
companion sites.
Respond to each prompt to proceed to the next step in the installation. If you
want to change something on a previous step, type 'back'.
You may cancel this installation at any time by typing 'quit'.
PRESS <ENTER> TO CONTINUE:
Below is a sample response file that uses the inputs from Configuring SAP ASE [page 41] to install the HADR
system described there. The changed responses are in bold.
##############################################################################
# HADR sample responses file for SAP Adaptive Server Enterprise.
Context
The steps for installing components in an interactive text mode are the same as when installing in GUI mode,
except you use the following command to execute the installer from the command line, and you enter text to
specify installation options:
./setup.bin -i console
(Windows)
setupConsole.exe -i console
Procedure
(UNIX)
./setup.bin -i console
(Windows)
setupConsole.exe -i console
./setup.bin -i console
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
===============================================================================
SAP Adaptive Server Enterprise (created with InstallAnywhere)
-------------------------------------------------------------------------------
Preparing CONSOLE Mode Installation...
===============================================================================
Introduction
------------
InstallAnywhere will guide you through the installation of SAP Adaptive Server
Enterprise 16.0 SP02 GA.
It is strongly recommended that you quit all programs before continuing with
this installation.
Respond to each prompt to proceed to the next step in the installation. If you
want to change something on a previous step, type 'back'.
You may cancel this installation at any time by typing 'quit'.
PRESS <ENTER> TO CONTINUE:
To perform a silent, or unattended installation, run the installer and provide a response file that contains your
preferred installation configuration.
Prerequisites
Create a response file based on the instructions from Using setup.bin or setupConsole.exe with a Response File
[page 66].
Context
Edit these response files for your site, and run them with the setup.bin installer to install and configure the
HADR system. See Sample Response Files [page 67] for an example of an edited response file based on the
choices in Configuring SAP ASE [page 41].
Procedure
1. Run the installer in silent mode with a response file, where <response_file> is the absolute path of the response file containing the installation options you chose.
You must agree to the SAP License Agreement when installing in silent mode. You can either:
○ Include the option -DAGREE_TO_SAP_LICENSE=true in the command line argument, or
○ Edit the response file to include the property AGREE_TO_SAP_LICENSE=true.
Except for the absence of the GUI screens, all installation actions are the same, and the result of an
installation in silent mode is exactly the same as one performed in GUI mode using the same responses.
2. The installer for SAP ASE requires non-null passwords for the sa login in SAP ASE, and uafadmin and
sccadmin logins in SAP Control Center. For this reason, add these rows to the response file:
○ SY_CFG_ASE_PASSWORD=<SAP ASE sa password>
○ CONFIG_SCC_CSI_SCCADMIN_PWD=<SCC admin password>
○ CONFIG_SCC_CSI_UAFADMIN_PWD=<SCC agent admin password>
○ CONFIG_SCC_REPOSITORY_PWD=<SCC repository password>
Each password must be at least six characters long. The sccadmin and uafadmin passwords need not be the same as the sa password. You can also set these passwords using these environment variables:
○ SY_CFG_ASE_PASSWORD
○ CONFIG_SCC_CSI_SCCADMIN_PWD
○ CONFIG_SCC_CSI_UAFADMIN_PWD
○ CONFIG_SCC_REPOSITORY_PWD
Note
The software installer installs and configures the SAP ASE Cockpit by default. Perform these steps on each host if you installed the HADR system from the command line.
Context
These steps assume your HADR system (including the primary and standby servers, Replication Server, and
RMA) is running.
Procedure
cd /work/SAP1/COCKPIT-4/plugins/SFSAP1
If this folder does not exist, copy the contents of the $SYBASE/COCKPIT-4/templates/
com.sybase.ase to $SYBASE/COCKPIT-4/plugins/. For example:
cp -r $SYBASE/COCKPIT-4/templates/com.sybase.ase $SYBASE/COCKPIT-4/plugins/
SFSAP1
5. Re-encrypt the passwords for the SAP ASE, Replication Server, and RMA users. You perform these steps
on each node because the generated encrypted passwords are different on each host:
a. Move to $SYBASE/COCKPIT-4/bin.
b. Execute the passencrypt password encryption utility:
./passencrypt
source SYBASE.csh
cd $SYBASE/COCKPIT-4/bin
./cockpit.sh --start
Starting Cockpit Server...
---------- SYSTEM INFO ----------
Home Directory: /work/SAP1/COCKPIT-4
Version: Cockpit Server 4 SP11
Node: SFMACHINE1(10.173.1.109)
Log Message level: WARN
Platform: linux
Bitwidth: 64
OS Name: Linux
OS Version: 2.6.32-504.8.1.el6.x86_64
OS Architecture: amd64
Available Processors: 4
Total Physical Memory: 5974 MB
9. The SAP ASE Cockpit is running when the cockpit prompt appears. If you see any errors, issue shutdown, fix the appropriate lines in agent-config.xml, then restart the SAP ASE Cockpit.
cockpit.sh displays the connection URL for the client just before the cockpit prompt.
10. Launch a browser and open the URL specified by the cockpit.sh output (https://SFMACHINE1:4283/
cockpit in the output above). When the page opens, you may get a warning about this being an untrusted
connection. Click “I understand the risks” and “Add exception.”
11. If necessary, install Adobe Flash.
12. Enter the user name and password to log into SAP ASE Cockpit.
13. When you first connect, SAP ASE Cockpit may issue a statement about inadequate values for
configuration parameters. Enter the appropriate values and click OK.
Note
SAP ASE Cockpit displays the Monitoring tab. You can safely ignore the errors at the bottom of the screen.
14. If you did not configure the cockpit technical user in the SAP ASE Cockpit, configure it now.
a. Select the Explore tab.
b. Select ASE Servers > <server_name> > Create Cockpit Technical User.
Note
If the cockpit technical user is already created (for example, during an install using the SAP
installer), the menu item reads Update Cockpit Technical User, which includes a wizard for updating
a technical user in SAP ASE for cockpit service.
$SYBASE/COCKPIT-4/bin/cockpit --stop
b. Move to $SYBASE/COCKPIT-4/bin:
cd $SYBASE/COCKPIT-4/bin
passencrypt -csi
passencrypt -csi
Password:
e. Copy the output for the new password (see the bold text below):
passencrypt -csi
Password:
{SHA-256:aszZYC3i5Ms=}vvytnz5U95b7UyTMxrRxq7TizJY8R088Ri8IimnAFXU=
f. Move to $SYBASE/COCKPIT-4/conf:
cd $SYBASE/COCKPIT-4/conf
g. Edit the csi_config.xml file. Search for the uafadmin section of the file, and paste the value in the
uafadmin password property between the two double quotes (see the bold text):
You can:
● Create them during installation. See the step describing the ASE HADR Setup Site screen in Configuring
SAP ASE [page 41].
● Add them after installation. See Adding Databases from the Command Line After Installation [page 294].
● Use the RMA commands. See sap_update_replication [page 548].
● Use SAP ASE Cockpit. See Manage SAP ASE > Always-On (HADR) Option > Adding an Existing Database to the HADR System in the SAP ASE Cockpit documentation.
Regardless of the method you use, the databases you create are initially empty. Enter data into the database by loading a dump taken from a previous installation, or create new data using your site's resources.
Remove all replication configurations before upgrading to the latest version of SAP ASE, and rematerialize all
databases as part of the HADR configuration.
Context
Remove the existing Replication Server connections, subscriptions, and so on from the environment before
configuring it for HADR.
Procedure
1. Remove subscriptions to database, table, and function replication definitions, articles, or publications. This
example drops the authors_sub subscription for the authors_rep table replication definition:
3. Remove databases from the replication system. This example drops the connection to the pubs2 database
in the SJSAP2 companion:
use <primary_dbname>
go
sp_stop_rep_agent <primary_dbname>
go
b. Disable the secondary truncation point in the SAP ASE database that is being migrated:
use <database_name>
go
dbcc settrunc('ltm', 'ignore')
go
See the installation guide for your platform for instructions on upgrading SAP ASE to version 16.0 SP02.
Note
After you create a new SAP ASE instance, copy the configuration file from the primary server to capture the
configuration values, cache configurations, and to re-create any additional user temporary databases.
The user bindings for tempdb are carried over when the HADR system synchronizes syslogins.
Use the setuphadr utility to migrate an existing SAP ASE server to the HADR system.
Procedure
./setup.bin
b. In the Choose Install Folder screen, enter the current installation directory for the primary server (that
is, $SYBASE):
Note
k. In the Configure New Servers screen, deselect all options as the servers are already configured. Click
Next.
source $SYBASE/SYBASE.csh
4. Restart SAP ASE and Backup Server. Move to the $SYBASE/$SYBASE_ASE/install directory and issue:
./RUN_<server_name>
./RUN_<backup_server_name>
5. Start RMA:
○ UNIX – $SYBASE/$SYBASE_ASE/bin/rma
○ Windows – (recommended) start the Sybase DR Agent Windows service. Alternatively, you can issue
%SYBASE%\%SYBASE_ASE%\bin\rma.bat at the command line.
6. Identify and record these values on the primary server (both the primary and companion servers require
the same values):
○ Device sizes – use sp_helpdevice
○ Database sizes – use sp_helpdb
○ Page sizes – Use this to determine the logical page size:
select @@maxpagesize
○ Default language, character set, sort order – use sp_helplanguage and sp_helpsort
See Collecting Migration Configuration Details [page 101] for information and examples.
7. Save the $SYBASE/ASE-16_0/<server_name>.cfg file.
After you unload the binaries and the installer displays the Configure New Servers screen, deselect all the
options and click Next.
10. Set the environment variables. Source the SYBASE.csh or SYBASE.sh files:
source $SYBASE/SYBASE.csh
11. Use the srvbuild or srvbuildres utility (syconfig.exe or sybatch.exe on Windows) to create the
companion server and Backup Server (and, if necessary, XP Server and Job Scheduler). srvbuild or
srvbuildres also allow you to create the technical user and enable SAP ASE Cockpit monitoring. See the
Configuration Guide > Configuring New Servers with srvbuild. Make sure that the primary and companion
servers use the same:
○ Application type
○ Logical page size
shutdown
16. Copy the <server_name>.cfg from the primary server to the companion server (the default location of
the <server_name>.cfg file is in <$SYBASE/$SYBASE_ASE>).
17. Start SAP ASE using the newly copied <server_name>.cfg by moving to the $SYBASE/ASE-16_0/
install directory and issuing:
./RUN_<server_name>
Note
If you have not already done so, create the cluster ID database. Enter a value of 200 MB or larger if you
specified the ase_data_device_create_* and ase_log_device_create_* properties for the
cluster ID database.
setup_site=<primary_site>
is_secondary_site_setup=false
See Sample setup_hadr.rs Response File [page 103] for examples of the changes required.
./ASE-16_0/bin/setuphadr setup_SFHADR.rs
Setup ASE server configurations
Set server configuration "max network packet size" to "16384"...
Reboot SAP ASE "SFSAP1"...
Setup ASE server configurations...Success
Setup user databases
Create user database AS1...
Set "pubs2" database "trunc log on chkpt" option to "false"...
Setup user databases...Success
Setup ASE HADR maintenance user
Create maintenance login "DR_maint"...
Grant "sa_role" role to "DR_maint"...
Grant "replication_role" role to "DR_maint"...
Grant "replication_maint_role_gp" role to "DR_maint"...
Create "sap_maint_user_role" role...
Grant set session authorization to "sap_maint_user_role"...
Grant "sap_maint_user_role" role to "DR_maint"...
Add auto activated roles "sap_maint_user_role" to user
"DR_maint"...
Allow "DR_maint" to be known as dbo in "master" database...
Allow "DR_maint" to be known as dbo in "AS1" database...
Allow "DR_maint" to be known as dbo in "pubs2" database...
Setup ASE HADR maintenance user...Success
Setup administrator user
Create administrator login "DR_admin"...
Grant "sa_role" role to "DR_admin"...
Grant "sso_role" role to "DR_admin"...
Grant "replication_role" role to "DR_admin"...
Grant "hadr_admin_role_gp" role to "DR_admin"...
Grant "sybase_ts_role" role to "DR_admin"...
Setup administrator user...Success
Setup Backup server allow hosts
Backup server on "site1" site: Add host "mo-bf1dc68822.mo.sap.corp" to allow dump and load...
Setup Backup server allow hosts...Success
Setup complete on "site1" site. Please run Setup HADR on "site2" site to
complete the setup.
setup_site=<companion_site>
is_secondary_site_setup=true
./ASE-16_0/bin/setuphadr setup_SJHADR.rs
Setup user databases
Set "pubs2" database "trunc log on chkpt" option to "false"...
Setup user databases...Success
If you already have scripts for creating the devices and databases for the primary companion, use those scripts to configure the companion server. Otherwise, use the sp_helpdb, sp_helpdevice, and sp_helpsort system procedures and the ddlgen utility to make sure the companion server exactly mimics the device and database layout of the primary companion.
sp_helpdevice
go
device_name physical_name
description
status cntrltype vdevno vpn_low vpn_high
----------- -----------------------------------
---------------------------------------------------------------------------------
------------------------ ------ --------- ------ ------- --------
master /work/SAP1/data/master.dat file system device, special,
dsync on, directio off, default disk, physical disk, 52.00 MB, Free: 8.00
MB 3 0 0 0 26623
salesdev1 /work/SAP1/data/salesdev1.dat file system device, special,
dsync off, directio on, physical disk, 15.00 MB, Free: 5.00
MB 2 0 5 0 7679
salesdev2 /work/SAP1/data/salesdev2.dat file system device, special,
dsync off, directio on, physical disk, 20.00 MB, Free: 0.00
MB 2 0 7 0 10239
saleslog1 /work/SAP1/data/saleslog1.dat file system device, special,
dsync off, directio on, physical disk, 10.00 MB, Free: 2.00
MB 2 0 6 0 5119
sybmgmtdev /work/SAP1/data/sybmgmtdb.dat file system device, special,
dsync off, directio on, physical disk, 76.00 MB, Free: 0.00
MB 2 0 4 0 38911
sysprocsdev /work/SAP1/data/sysprocs.dat file system device, special,
dsync off, directio on, physical disk, 196.00 MB, Free: 0.00
MB 2 0 1 0 100351
systemdbdev /work/SAP1/data/sybsysdb.dat file system device, special,
dsync off, directio on, physical disk, 6.00 MB, Free: 0.00
MB 2 0 2 0 3071
tapedump1 /dev/nst0 unknown device type, disk, dump
device
16 2 0 0 20000
The following shows that server SFSAP1 has the sales database installed:
sp_helpdb
go
name db_size owner dbid created durability lobcomplvl
inrowlen
status
sp_helpdb sales
go
name db_size owner dbid created durability lobcomplvl inrowlen
status
----- ------------- ----- ---- ------------ ---------- ---------- --------
--------------
sales 38.0 MB sa 4 Jan 19, 2016 full 0 NULL no
options set
(1 row affected)
---------------------------------------------------------------------------------
-----------------------------
log only free kbytes =
8128
If your site does not use scripts to configure the databases and devices, you can use the ddlgen utility to populate the setup_hadr.rs response file, which you will use to configure the companion server. For example, this displays the object definitions for the pubs2 database:
The codeblock below illustrates a sample setup_hadr.rs response file based on the primary server, as
described in the installation chapters of this user guide (changed responses are in bold).
###############################################################################
# Setup HADR sample responses file
#
# This sample response file sets-up SAP ASE HADR on
# hosts "host1" (primary) and "host2" (companion).
#
# Prerequisite :
# - New SAP ASE and Backup servers are already setup and started on "host1"
#   and "host2".
# See HADR User Guide for requirements on SAP ASE servers.
# - Replication Management Agent (RMA) is already started on "host1" and "host2".
#
# Usage :
# 1. On host1 (primary), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <this_responses_file>
#
# 2. Change this responses file properties:
# setup_site=site2
# is_secondary_site_setup=true
#
# 3. On host2 (companion), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <responses_file_from_step_2>
#
###############################################################################
# ID that identifies this cluster
#
# Value must be unique,
# begin with a letter and
# 3 characters in length.
# If XA replication is enabled
#
# Valid values: true, false
xa_replication=false
#If need to disable the checks for reference constraints
#
#Valid values: true, false
disable_referential_constraints=false
# Databases that will participate in replication
# and "auto" materialize.
#
# If database doesn't exist in the SAP ASE, you need
# to specify <site>.ase_data_device_create_[x]_[y] and
# <site>.ase_log_device_create_[x]_[y] properties.
# See below.
#
# ASE HADR requires SAP ASE to have a database
# with cluster ID name (see "cluster_id" above).
# If you have not created this database, you can
# enter it here to have it created.
# cluster ID database
participating_database_1=AS1
materialize_participating_database_1=true
# user database
participating_database_2=pubs2
materialize_participating_database_2=true
# user database
# participating_database_3=userdb2
# materialize_participating_database_3=true
# Enable SSL - true or false
enable_ssl=true
# Name and location of the Root CA certificate. If you are using a self-signed
# certificate, put your public key file here
ssl_ca_cert_file=/tmp/rootCA.pem
You can install the Fault Manager using the GUI installer or from the command line.
The Fault Manager is located on a machine that is separate from the machine hosting the primary and
secondary companions, and uses a separate installer.
Prerequisites
Note
The Fault Manager installer response file is automatically generated when you complete the HADR
configuration on the second site. The response file is located in
$SYBASE/log/fault_manager_responses.txt. Use this syntax to install the Fault Manager on the
third host with this response file:
<ASE_installer_directory>/FaultManager/setup.bin -f
<fault_manager_responses.txt>
Note
The steps below describe the installation process using the SAP installer. Often, it is much easier to use a
response file to install the Fault Manager because many of the values are automatically filled out when you
install and configure the servers (for example, primary and standby hosts, primary and standby SAP ASE
directories and port numbers, primary and standby RMA hosts and port numbers, SAP ASE cockpit hosts
and port numbers, and so on). See Using setup.bin or setupConsole.exe with a Response File [page 66] for
information about installing and configuring the Fault Manager with a response file.
Keep in mind:
● The SAP ASE user provided during the Fault Manager installation must have the sa_role, replication_role
and mon_role. Grant these roles by logging into the primary server with isql and issuing (this example
grants the roles to a user named fmuser):
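A minimal sketch of those grants, assuming a login named fmuser as in the example above (the server name PRIMARY_ASE and the isql login options are placeholders for your environment):

```shell
# Connect to the primary SAP ASE with isql; you are prompted for the password.
# PRIMARY_ASE is a placeholder for your primary server name.
isql -Usa -SPRIMARY_ASE <<'EOF'
grant role sa_role to fmuser
go
grant role replication_role to fmuser
go
grant role mon_role to fmuser
go
EOF
```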
● On the HP platform, the Fault Manager requires the C++ libCsup11.so.1 library.
● On Linux, the Fault Manager requires GLIBC version 2.7 or later.
● The node running the Fault Manager must use the same platform as the HADR system nodes (however, it
need not have the same operating system version).
● Set the number of file descriptors to 4096 (or higher) to start the Fault Manager. The default value for
many systems is 1024. To determine the number of file descriptors to which your system is set, enter:
○ On the C-shell:
limit descriptors
○ On the Bourne or Korn shell:
ulimit -n
To raise the limit to 4096 on the Bourne or Korn shell, enter:
ulimit -n 4096
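As a sketch, the check and the raise can be combined in a Bourne-shell snippet (raising the limit above the current hard limit requires administrator action, such as an entry in the system limits configuration):

```shell
# Show the current soft limit on open file descriptors.
current=$(ulimit -n)
echo "current descriptor limit: $current"

# Try to raise the soft limit for this session if it is below 4096.
if [ "$current" != "unlimited" ] && [ "$current" -lt 4096 ]; then
    ulimit -n 4096 || echo "could not raise the limit in this session"
fi
echo "descriptor limit now: $(ulimit -n)"
```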
./setup.bin -f <fault_manager_responses.txt>
If you did not generate a response file, you can edit the sample response file. See Using setup.bin or
setupConsole.exe with a Response File [page 66].
./setup.bin
3. In the End-user License Agreement screen, select the geographic location, and agree to the license terms.
Click Next.
6. When the Fault Manager has installed, you see the Configure Fault Manager screen. Click Yes to configure
the Fault Manager. If you click No, you can manually configure the Fault Manager at a later time using the
sybdbfm utility. See Configuring the Fault Manager from the Command Line [page 142].
Option: SAP ASE host name
Description: Host name of the primary SAP ASE.

Option: SAP ASE installed directory
Description: Full path to the Adaptive Server release directory for the primary companion ($SYBASE on UNIX, %SYBASE% on Windows).

Option: SAP ASE installed user
Description: ID of the user who installed the primary companion.
Option: SAP ASE host name
Description: Host name of the companion SAP ASE.

Option: SAP ASE installed directory
Description: Full path to the SAP ASE release directory for the secondary companion ($SYBASE on UNIX, %SYBASE% on Windows).

Option: SAP ASE installed user
Description: ID of the user who installed the secondary companion.
Option: Virtual ASE host name
Description: Name of the virtual host running SAP ASE.

Option: Internet protocol version
Description: Select the Internet protocol from the drop-down list.
$SYBASE/FaultManager/bin/sybdbfm status
The Fault Manager enters a bootstrap mode when you start it using the hadm command.
During the bootstrap mode, the Fault Manager requires that both SAP ASE nodes are running and configured
for HADR, and performs various operations and checks on both nodes. Once the bootstrap is complete, the
Fault Manager continuously monitors the HADR system and generates sanity reports until you stop the Fault
Manager.
This scenario illustrates how the Fault Manager reacts during a failover. The scenario assumes you have two
sites: London, which contains the primary server, and Paris, which contains the companion server. HADR is
running normally, and replication is in synchronous mode.
If the London site becomes unavailable unexpectedly, and the Fault Manager ha/syb/use_cockpit profile
parameter is set to 1, the Fault Manager sends this message to the SAP ASE Cockpit:
If the Fault Manager ha/syb/failover_if_insync profile parameter is set to 1, Fault Manager automatically
triggers an unplanned automatic failover to the companion server, Paris. When failover is initiated, the Fault
Manager sends this message to the SAP ASE Cockpit:
When the failover is complete, the Fault Manager sends this message to the SAP ASE Cockpit:
When the Paris companion server becomes the new primary companion, the Fault Manager sends this
message to the SAP ASE Cockpit:
The Fault Manager should be able to contact the London site when it becomes available. When contact is
restored, the Fault Manager sends this message to the SAP ASE Cockpit:
Although the Fault Manager should recognize the London site as a companion server, it is not yet available in
the HADR system because replication is not yet restored. In this situation, the Fault Manager sends one of
these messages to the SAP ASE cockpit:
Replication is IN DOUBT
Replication is DOWN
Replication is SUSPENDED
You then use the RMA to restore replication and make the London companion site available. If the Fault
Manager ha/syb/set_standby_available_after_failover profile parameter is set to 1, the Fault
Manager makes the host available using the sap_host_available RMA command. However, if the Fault
Manager ha/syb/set_standby_available_after_failover profile parameter is set to 0, you manually
issue the sap_host_available command from RMA.
Regardless of how it is issued, the sap_host_available command restores the replication in synchronous
mode and completes the HADR restoration.
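The manual path can be sketched as an isql session against the RMA (the RMA host, the port, the DR_admin password handling, and the site name London from this scenario are placeholders):

```shell
# Connect to the RMA as DR_admin and mark the recovered site available;
# this restores replication in synchronous mode. RMA_HOST is a placeholder,
# and 4909 is the Fault Manager's default RMA port per the parameter tables.
isql -UDR_admin -SRMA_HOST:4909 <<'EOF'
sap_host_available London
go
sap_status task
go
EOF
```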
If a network issue disconnects the primary server from the network, the heartbeat client running on the
primary host deactivates the primary site, and the Fault Manager promotes the standby node to the primary
server in Active mode.
Not all profile parameters appear in the profile file after you configure the Fault Manager. The Fault Manager
uses default values for any parameters you do not specify.
Use the profile parameters in the subsequent sections to customize the Fault Manager, adding the appropriate
setting to the profile file. Restart the Fault Manager anytime you add or change a profile parameter. See
Administering the Fault Manager [page 145] for information about starting the Fault Manager.
The Fault Manager includes numerous profile parameters, which are set in the Fault Manager profile file, and
are similar to configuration parameters for SAP ASE.
The profile file is initially populated with defaults, but users can modify these defaults to match their site's
configuration.
Note
Restart the Fault Manager after adding or modifying any profile parameter. Use
$SYBASE/FaultManager/bin/sybdbfm restart to restart the Fault Manager.
ha/syb/version
    Default value: 1
    Possible values: 1

ha/syb/dbfmhost
    Default value: the host on which you loaded the Fault Manager binary
    Possible values: IP address or hostname
    Description: The host on which the Fault Manager runs.

ha/syb/non_bs
    Default value: 1
    Possible values: set to 1 for a Custom Application HADR system
    Description: Indicates the type of HADR installation.
    Note: Do not change this parameter.

ha/syb/exedir
    Default value: the current working directory in which you run the ./sybdbfm installation
    Possible values: any valid directory path
    Description: Indicates the directory from which you run the Fault Manager.

ha/syb/h2hport
    Default value: 13797
    Possible values: any valid unused port number
    Description: Describes the port the heartbeat client uses to hear communication.
There are a number of parameters that affect the Fault Manager configuration.
There are a number of parameters that affect the primary site's configuration.
ha/syb/primary_dbhost
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host where the primary SAP ASE will run.

ha/syb/primary_dbport
    Default value: 4901
    Possible values: any valid unused port number
    Description: Port on which the primary SAP ASE runs.

ha/syb/primary_cockpit_host
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host on which the primary SAP ASE is installed.

ha/syb/primary_cockpit_port
    Default value: 4998
    Possible values: any valid unused port number set during SAP ASE installation
    Description: Port where the Fault Manager connects to SAP ASE Cockpit to pass event messages.

ha/syb/primary_site
    Default value: no default; blank
    Possible values: any valid logical site name that can be given in RMA to this HADR system node
    Description: Logical site name given in RMA for this HADR system node.

ha/syb/primary_hbport
    Default value: 13777
    Possible values: any valid unused port number (the primary Fault Manager heartbeat port number and the standby Fault Manager heartbeat port number cannot be the same)
    Description: Port from which the primary site heartbeat client makes contact.

ha/syb/primary_dr_host
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host from which the primary site RMA runs.

ha/syb/primary_dr_port
    Default value: 4909
    Possible values: any valid unused port number
    Description: Port from which the primary node RMA runs.
    Note: The HADR installation uses a default port number of 7001 for the Custom Application, but a default port number of 4909 for the Fault Manager.

ha/syb/primary/ase_instance_name
    Default value: no default; blank
    Possible values: any valid SAP ASE server name
    Description: SAP ASE server name.

ha/syb/primary/ase_instance_path
    Default value: /sybase/
    Possible values: any valid directory path to the SAP ASE installation directory
    Description: Location where SAP ASE has been installed.

ha/syb/primary/ase_instance_user
    Default value: syb
    Possible values: any valid user with appropriate permissions
    Description: (UNIX) Username who installed SAP ASE, or the login with the appropriate permissions to access the SAP ASE installation.
There are a number of parameters that affect the companion site's configuration.
ha/syb/standby_dbhost
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host on which the standby SAP ASE runs.

ha/syb/standby_dbport
    Default value: 4901
    Possible values: any valid unused port number
    Description: Port on which the standby SAP ASE runs.

ha/syb/standby_cockpit_host
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host where the standby SAP ASE is installed. SAP ASE Cockpit must know the install directory to run.

ha/syb/standby_cockpit_port
    Default value: 4998
    Possible values: any valid unused port number set during SAP ASE installation
    Description: Port on which the Fault Manager connects to SAP ASE Cockpit to pass event messages to it.

ha/syb/standby_site
    Default value: no default; blank
    Possible values: any valid logical site name that can be given in RMA to this HADR system node
    Description: Logical site name given in RMA for this HADR system node.

ha/syb/standby_hbport
    Default value: 13787
    Possible values: any valid unused port number (the standby Fault Manager heartbeat port number and the primary Fault Manager heartbeat port number cannot be the same)
    Description: Port from which the standby heartbeat client makes contact.

ha/syb/standby_dr_host
    Default value: host that runs the Fault Manager
    Possible values: IP address or hostname
    Description: Host from which the standby RMA runs.

ha/syb/standby_dr_port
    Default value: 4909
    Possible values: any valid unused port number
    Description: Port from which the RMA runs on the standby node.
    Note: The HADR installation uses a default port number of 7001 for the Custom Application, but a default port number of 4909 for the Fault Manager.

ha/syb/standby/ase_instance_name
    Default value: no default; blank
    Possible values: any valid SAP ASE server name
    Description: SAP ASE server name.

ha/syb/standby/ase_instance_path
    Default value: /sybase/
    Possible values: any valid directory path to the SAP ASE installation directory
    Description: Location where SAP ASE was installed.

ha/syb/standby/ase_instance_user
    Default value: syb
    Possible values: any valid user with appropriate permissions
    Description: (UNIX) Username who installed SAP ASE, or the login with the appropriate permissions to access the SAP ASE installation.
There are a number of parameters that affect the frequency of the Fault Manager's communication checks.
ha/syb/check_frequency
    Default value: 3 (seconds)
    Possible values: any positive numeric value
    Description: The unit of frequency upon which the other units are based.

ha/syb/standby_ping_frequency
    Default value: 10 (10 units of check_frequency; that is, 30 seconds)
    Possible values: any positive numeric value
    Description: Frequency of the standby database shallow probe.

ha/syb/primary_hostctrl_status_frequency
    Default value: 100 (100 units of check_frequency; that is, 300 seconds)
    Possible values: any positive numeric value
    Description: Frequency of the primary database deep probe.

ha/syb/standby_hostctrl_status_frequency
    Default value: 100 (100 units of check_frequency; that is, 300 seconds)
    Possible values: any positive numeric value
    Description: Frequency of the standby database deep probe.

ha/syb/report_status_frequency
    Default value: 100 (100 units of check_frequency; that is, 300 seconds)
    Possible values: any positive numeric value
    Description: Frequency with which the status of the components is reported.

ha/syb/replication_status_check_frequency
    Default value: 100 (100 units of check_frequency; that is, 300 seconds)
    Possible values: any positive numeric value
    Description: Frequency of the deep probe to receive replication status.
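The frequency parameters are expressed in units of ha/syb/check_frequency, so the effective interval in seconds is the product of the two values. A sketch using the defaults quoted above:

```shell
# Effective interval (seconds) = parameter value (units) * check_frequency (seconds).
check_frequency=3            # ha/syb/check_frequency default
standby_ping_frequency=10    # ha/syb/standby_ping_frequency default
report_status_frequency=100  # ha/syb/report_status_frequency default

echo "standby shallow probe: $(( standby_ping_frequency * check_frequency )) seconds"
echo "status report:         $(( report_status_frequency * check_frequency )) seconds"
```

This prints 30 seconds for the shallow probe and 300 seconds for the status report, matching the defaults in the table.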
There are a number of parameters that affect when the Fault Manager's communication times out.
ha/syb/start_database_timeout
    Default value: max integer
    Possible values: any positive numeric value
    Description: Timeout period for attempts to start the primary and companion databases.

ha/syb/stop_database_timeout
    Default value: 60 seconds
    Possible values: any positive numeric value
    Description: Timeout period for attempts to stop the primary and companion databases.

ha/syb/failover_timeout
    Default value: 10 minutes
    Possible values: any positive numeric value
    Description: Timeout period for failover attempts.

ha/syb/replication_status_timeout
    Default value: 180 seconds
    Possible values: any positive numeric value
    Description: Timeout period to receive replication status.

ha/syb/odbc_connect_timeout
    Default value: 3 seconds
    Possible values: any positive numeric value
    Description: Timeout period for the shallow probe connect.

ha/syb/odbc_command_timeout
    Default value: 3 seconds
    Possible values: any positive numeric value
    Description: Timeout period for the shallow probe execution.

ha/syb/upload_executable_timeout
    Default value: 60 seconds
    Possible values: any positive numeric value
    Description: Timeout period for uploading the heartbeat client to the primary or companion host.

ha/syb/hb_fm_timeout
    Default value: 2.5 seconds
    Possible values: any positive numeric value
    Description: Timeout period for the heartbeat client to determine if the connection to the Fault Manager is lost.

ha/syb/hb_hb_timeout
    Default value: 2.5 seconds
    Possible values: any positive numeric value
    Description: Timeout period for any heartbeat client to determine if the connection to the other heartbeat client is lost.

ha/syb/hb_set_db_inactive_timeout
    Default value: 10 seconds
    Possible values: any positive numeric value
    Description: Timeout period for the heartbeat client to set the SAP ASE database to "inactive."

ha/syb/hb_kill_db_timeout
    Default value: 10 seconds
    Possible values: any positive numeric value
    Description: Timeout period for the heartbeat client to kill the SAP ASE database.
There are a number of parameters that affect the actions the Fault Manager performs.
There are a number of parameters that affect the Fault Manager's virtual and floating IP address.
ha/syb/vdbport
    Default value: 4901
    Possible values: any valid unused port number
    Description: Port number for the floating IP.

ha/syb/vdb_interface
    Default value: network interface set by database
    Possible values: network interfaces available
    Description: Option to set the network interface.
Upgrade the Fault Manager by performing a binary overlay of the existing Fault Manager.
Procedure
<Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
<ASE_installer_directory>/FaultManager/setup.bin
3. Select the geographic location, and agree to the license terms. Click Next.
4. In the Choose Install Folder screen, enter the same installation path as the previous version of the Fault
Manager.
5. Review the installation summary. Click Previous to make changes, or click Install. The installer unloads the
files to disk.
<Fault_Manager_install_dir>/FaultManager/sybdbfm_<CID>
The Fault Manager monitors the health of the primary and standby servers, and triggers a failover if the primary
server or host fails, and the HADR system is running in synchronous mode.
The Fault Manager is a standalone component that runs on a third node, preferably where the application
server is running, and on the same platform as the HADR system nodes.
The Fault Manager functions in two modes: Fault Manager mode and heartbeat client mode. The Fault
Manager runs on a third host. In Fault Manager mode, it monitors SAP ASE and Replication Server, performs
functions such as initiating failover and restarting the server, and acts as the server for the heartbeats that it
receives from the heartbeat clients.
In heartbeat client mode, the Fault Manager runs on the primary and standby hosts. Each heartbeat client
sends a heartbeat to the Fault Manager, checks for heartbeats from the fellow heartbeat client, and sends its
own heartbeat to it (primarily to avoid a split-brain situation). If the heartbeat client on the primary host loses
its connections with both the Fault Manager and the fellow heartbeat client, it triggers a deactivation of the
primary server. If the deactivation fails, it kills the SAP ASE process.
● Triggers a failover using saphostctrl if the primary server is down or if the primary node is down or
unreachable, and the standby server is healthy and synchronously replicated.
● Restarts the primary server if it is down and replication is asynchronous.
Note
Stop or hibernate the Fault Manager when you perform any maintenance activity on SAP ASE or other
components in the HADR system. Once hibernated, the Fault Manager process continues to run but will not
monitor the database, and no failover occurs. The heartbeat processes are stopped during hibernation.
Scenario: Primary server is unreachable (network glitch or SAP ASE unresponsive).
Action: Failover to the companion if:
● SAP ASE is unreachable because it has become unresponsive and the failover_if_unresponsive
parameter is set in the Fault Manager configuration file.
● There is a network glitch.

Scenario: Primary server reports an error condition.
Action: If client login and data access are unaffected, no action is taken. The Fault Manager does not scan
the SAP ASE log for errors.

Scenario: SAP ASE, Replication Server, or RMA on the companion host are down.
Action: Restart these components if the corresponding parameters in the Fault Manager profile are set (for
example, chk_restart_repserver).

Scenario: Fault Manager is down.
Action: For the Custom Application version of HADR, manually restart the Fault Manager (see Administering
the Fault Manager [page 145]). See the Business Suite documentation for instructions on restarting the
Fault Manager.

Scenario: Fault Manager is unreachable from the two sites (primary and companion).
Action: Automatic failover is disabled since the Fault Manager cannot reach the primary and companion
sites. If the primary SAP ASE goes down in this situation, manual intervention is required to initiate failover.

Scenario: Heartbeat from the primary is missed for a preconfigured timeout.
Action: The Fault Manager keeps trying to restart the heartbeat on the primary site. The Fault Manager
status shows the DB host status as UNUSABLE for the primary host.

Scenario: Heartbeat from the companion is missed for a preconfigured timeout.
Action: The Fault Manager keeps trying to restart the heartbeat on the companion site. The Fault Manager
status shows the DB host status as UNUSABLE for the companion host.

Scenario: Companion SAP ASE is down.
Action: Notify the cockpit, and attempt to restart the companion SAP ASE if
ha/syb/allow_restart_companion=1 is included in the profile file.

Scenario: Companion host is down.
Action: Replication is turned off. Notify the cockpit. May require manual intervention to restart the host and
other components.

Scenario: Companion SAP ASE is unreachable.
Action: Notify the cockpit and attempt to restart the companion SAP ASE if the corresponding Fault
Manager parameter is set.

Scenario: Replication Server or the RMA on the primary host are restored.
Action: Notify the cockpit.
Although you should use the SAP installer to install and configure the Fault Manager, you can also use the
sybdbfm utility to configure and run it. sybdbfm is located in $SYBASE/FaultManager/bin.
Syntax
sybdbfm [<options>]
hadm pf=<SYBHA.PFL> : start ha process.
hahb pf=<SYBHA.PFL> : start heartbeat.
install [pf=<SYBHA_INST.PFL>] : install.
uninstall [pf=<SYBHA.PFL>] : uninstall.
check pf=<SYBHA_INST.PFL> : check.
Parameters
● hadm pf=<full_path_to/SYBHA.pfl> – Starts the Fault Manager process. Requires a profile file.
● hahb pf=<full_path_to/SYBHA.pfl> – Starts the heartbeat process. Requires a profile file. The Fault
Manager (running in monitoring mode) internally starts the process in heartbeat mode on the hosts
running SAP ASE.
● install [pf=<full_path_to/SYBHA.pfl>] – Configures the Fault Manager. You can use this
parameter:
○ In an interactive mode, during which you provide information at command line prompts to install the
Fault Manager.
○ With a profile file, which is provided by the pf=<full_path_to/SYBHA.pfl> parameter. The profile
contains the details you would provide at the interactive command prompt.
● uninstall [pf=<full_path_to/SYBHA.pfl>] – Uninstalls the Fault Manager. You can use this
parameter:
○ In an interactive mode, during which you provide information at command line prompts to uninstall the
Fault Manager.
○ With a profile file, which is provided by the pf=<full_path_to/SYBHA.pfl> parameter. The profile
contains the details you would provide at the interactive command prompt.
● check pf=<full_path_to/SYBHA.pfl> – Performs the basic bootstrapping of the Fault Manager to
confirm if the details provided in the profile file and the user credentials in the SecureStore file are correct
and that the Fault Manager can run.
● hibernate – Hibernates the Fault Manager. Useful for pausing the Fault Manager for planned
maintenance activities on the HADR components (for example, planned failover, upgrades, and so on).
● resume – Resumes running the Fault Manager from the hibernating state.
● restart – Restarts the Fault Manager using an altered profile file. Used when any profile parameter is
modified or added.
● stop – Stops the Fault Manager and all heartbeat processes spawned on the HADR system nodes. Execute
stop from the same directory from which you started the Fault Manager.
● status – Displays the status of the Fault Manager.
● version – Displays version information along with other build and operating system version information
supported by the Fault Manager.
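For example, a planned maintenance window might bracket the work with hibernate and resume (a sketch; per the note for the stop parameter, run sybdbfm from the directory in which the Fault Manager was started):

```shell
# Pause Fault Manager monitoring before planned maintenance
# (no failover occurs while hibernated).
$SYBASE/FaultManager/bin/sybdbfm hibernate

# ... perform the planned maintenance on the HADR components ...

# Resume monitoring, then confirm the Fault Manager status.
$SYBASE/FaultManager/bin/sybdbfm resume
$SYBASE/FaultManager/bin/sybdbfm status
```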
The Fault Manager requires username and password combinations to connect to SAP ASE, RMA, SAP ASE
Cockpit, and SAP Host Agent. These usernames and passwords are stored in an encrypted format in the
SecureStore.
During configuration, the Fault Manager adds usernames and passwords for the following users in the
SecureStore:
● SADB_USER – SAP ASE user with the sa_role and replication_role roles.
● DR_USER – RMA user, used for connecting to RMA.
● SAPADM_USER – Operating system user, typically sapadm, used for the SAP Host Agent.
● SCC_USER – Administration user, typically sccadmin.
Use the rsecssfx utility to administer the SecureStore. Update any changed usernames and passwords in
the SecureStore: stop the Fault Manager, update the SecureStore using the rsecssfx utility, and restart the
Fault Manager. Keep the Fault Manager stopped while the password is changed in the cluster components.
Note
Source the $SYBASE/SYBASE.csh (SYBASE.sh for the Korn shell) file to configure the environment
variables.
● Use the put parameter to add or update entries in the SecureStore. The syntax is:
● Use the list parameter to list entries in the SecureStore. For example:
./FaultManager/bin/rsecssfx list
|------------------------------------------------------------------------|
| Record Key | Status | Time Stamp of Last Update |
|------------------------------------------------------------------------|
Note
Use the sybdbfm utility to view the status of the Fault Manager. For example:
$ sybdbfm status
fault manager running, pid = 17763, fault manager overall status = OK, currently
executing in mode PAUSING
*** sanity check report (1)***.
node 1: server star1, site hasite0.
db host status: OK.
db status OK hadr status PRIMARY.
node 2: server star2, site hasite1.
db host status: OK.
db status OK hadr status STANDBY.
replication status: SYNC_OK.
Edit the Fault Manager profile file to change any parameter. The profile file is named SYBHA.PFL, and is located
in the install directory of the Fault Manager on all platforms. Restart the Fault Manager for the profile parameter
changes to take effect.
You should continuously monitor the Fault Manager log (named dev_sybdbfm, and located in
<Fault_Manager_install_directory>/FaultManager).
Note
If a problem related to Fault Manager or the heartbeat requires you to consult SAP, back up the following
data when the problem occurs:
● Fault Manager data available on the host running Fault Manager ($SYBASE below is the directory where
Fault Manager is installed):
How you uninstall the Fault Manager depends on whether you installed it using the SAP installer or the
sybdbfm utility.
sybdbfm stop
2. Remove SecureStore-related files by issuing this from the directory that contains SYBHA.PFL:
./uninstall
The HADR feature allows SAP ASE applications to operate with zero downtime while you are updating the
SAP ASE software.
Complete the upgrade steps in a single sequence: partial upgrade is not supported (for example, you cannot
upgrade some components now and then upgrade the other components at another time). Replication is
suspended during some steps of a rolling upgrade, and if you perform a partial upgrade, logs continue to grow,
which can result in logs or the SPQ running out of space. During a rolling upgrade, the versions between SAP
ASE and Replication Server need not match.
The RUN_<rs_instance_name>.sh Replication Server runserver file is regenerated during an upgrade, and
any user changes to this file are lost. If your site requires these changes, edit the runserver file after the
upgrade is complete, then restart Replication Server to make the environment settings take effect.
Note
Before upgrading HADR with SAP Business Suite on SAP ASE, you may have to follow instructions from
your application vendors over the general guidelines in this chapter. See SAP note 2808173 for more
details.
In this topology, the primary server (ASE1) is installed on the same host as the inactive Replication Server
(SRS1). The active Replication Server (SRS2) is installed on a remote host, along with the standby server
(ASE2). Data changes that occur in ASE1 are sent by the Replication Agent thread to the active SRS2 running
on the remote host. The active SRS2 then routes these changes to ASE2, which is running on the same host as
the active Replication Server, SRS2. In this setup, the inactive Replication Server, SRS1, is not involved in data
movement until failover occurs. The communication among ASE1, SRS1, and ASE2 is through a client interface
(stream replication, indicated in this topology as "CI").
Run this command to determine which SAP ASE you are connected to in the HADR system:
select asehostname()
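For example, from isql (the sa login and the server entry MYASE are placeholders):

```shell
# asehostname() returns the host name of the SAP ASE instance serving the
# connection, which identifies the current primary after a failover.
# MYASE is a placeholder for your HADR server entry.
isql -Usa -SMYASE <<'EOF'
select asehostname()
go
EOF
```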
In this configuration, all components are running, and the standby server is almost in sync with the primary
server. Prior to upgrade, site1 is the primary server and site2 is the companion server (in high-availability – HA
– mode, the companion server is referred to as the standby server) with remote replication topology. The
Replication Server versions prior to the upgrade are compatible with the Replication Server versions after the
upgrade. If you upgrade from a "1-Off" release, you can upgrade only the SAP ASE or Replication Servers.
Note
Stop the Fault Manager before you perform a rolling upgrade (even if you are performing planned activities
like a planned failover). You can start the Fault Manager after the upgrade is complete. To stop the Fault
Manager, issue this from the <installation_directory>/FaultManager directory:
<Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
To perform a rolling upgrade, you first upgrade SRS1 on site1 to a higher version:
shutdown
4. Remove the RMA service: On Windows, execute the following command from either the
%SYBASE%\RMA-16_0\compatibility\WinService\Win32\Release directory, or the
%SYBASE%\RMA-16_0\compatibility\WinService\x64\Release directory, to remove the RMA service –
install_directory/setup.bin
6. In the Choose Install Folder screen, enter the current SAP ASE installation directory, then click Next.
7. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the Data Movement component in rolling upgrade.
The SAP installer must complete the software update before you continue to the next step.
8. Install a new RMA service on Windows: To install, and then start, the new RMA service on Windows, execute
the following command from either the %SYBASE%\RMA-16_0\compatibility\WinService\Win32\Release
directory, or the %SYBASE%\RMA-16_0\compatibility\WinService\x64\Release directory –
10. Log in to RMA on site1 as the DR_admin user and issue sap_upgrade_server to finish the upgrade for
Replication Server on site1:
At this point of the upgrade process, the HADR system is working normally with ASE1, SRS2, ASE2 at
the older versions, and SRS1 at newer version.
11. Log into RMA on site1 as the DR_admin user and issue:
This command allows a 30-second grace period for any running transactions to complete before the
deactivation starts. Failover will not succeed if there are still active transactions after 30 seconds. If this
occurs, retry the command when the system is not busy, use a longer grace period, or use the force
option to terminate the client connection (if it is safe) with:
12. The sap_failover command may take a long time to finish. To check the status of the sap_failover
command, issue this from the RMA:
sap_status task
13. Once the sap_status command returns Completed, resume replication by issuing this from the RMA:
sap_host_available <site1_site_name>
14. Verify that Replication Server is not running any isql processes during the Replication Server installation
step below. If there are isql processes running, Replication Server issues an error message stating "isql
text file busy".
15. Log in to RMA on site2 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
Replication Server on site2:
shutdown
17. Remove the RMA service: On Windows, execute the following command from either the
%SYBASE%\RMA-16_0\compatibility\WinService\Win32\Release directory, or the
%SYBASE%\RMA-16_0\compatibility\WinService\x64\Release directory, to remove the RMA service –
<install_directory>/setup.bin
19. In the Choose Install Folder screen, enter the current SAP ASE installation directory.
20. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the Data Movement component in rolling upgrade.
The SAP installer must complete the software update before you continue to the next step.
21. Install a new RMA service on Windows: To install, and then start, the new RMA service on Windows, execute
the following command from either the %SYBASE%\RMA-16_0\compatibility\WinService\Win32\Release
directory, or the %SYBASE%\RMA-16_0\compatibility\WinService\x64\Release directory –
22. After the SAP installer has finished the upgrade, start RMA:
○ (UNIX) – $SYBASE/$SYBASE_ASE/bin/rma
○ (Windows) – start the RMA Windows service by either of the following:
○ Starting Sybase DR Agent - <cluster_ID> from the Services panel
○ Issuing this command, where <cluster_ID> is the ID of the cluster:
23. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to finish the upgrade for
Replication Server on site2:
At this point of the upgrade process, the HADR system is working normally with ASE1 and ASE2 at the
older versions, and SRS1 and SRS2 at the newer version.
24. (Skip this step if you do not lock the sa user) If the sa user is locked, temporarily unlock this user on ASE1
during the upgrade process by logging in as the user with SSO permission on ASE1 and issuing:
25. Log into RMA on site1 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
SAP ASE on site1:
shutdown SYB_BACKUP
go
shutdown
go
27. Shut down SAP ASE Cockpit. If the SAP ASE Cockpit is running:
○ In the foreground – At the cockpit> prompt, execute:
shutdown
Note
Do not enter shutdown at a UNIX prompt: Doing so shuts down the operating system.
$SYBASE/COCKPIT-4/bin/cockpit.sh --stop
<install_directory>/setup.bin
29. In the Choose Install Folder screen, enter the current SAP ASE installation directory.
Note
The SAP installer must complete the software update before you continue to the next step.
31. After the SAP installer has completed the upgrade, use the updatease utility to upgrade SAP ASE, which
runs installmaster and performs other tasks to bring SAP ASE up to date. updatease is available as a
GUI display or a command line tool.
Note
You need not perform this step if you update the SAP ASE server instance in the SAP installer.
./updatease
./updatease
Server: SFSAP1
ASE Password:
Updating SAP Adaptive Server Enterprise 'SFSAP1'...
Running installmaster script...
installmaster: 10% complete.
installmaster: 20% complete.
installmaster: 30% complete.
$SYBASE/ASE-16_0/install/RUN_<backupserver_name>
$SYBASE/COCKPIT-4/bin/cockpit.sh
○ In the background:
From the Bourne shell (sh) or Bash, issue:
34. Log into RMA on site1 as the DR_admin user and issue sap_upgrade_server to complete the upgrade
for SAP ASE on site1:
35. (Skip this step if you do not lock the sa login) Log in to ASE1 as a user with sapsso permission and issue
this once the upgrade is complete:
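The lock statement is not shown in this excerpt. As a sketch using the sp_locklogin system procedure (verify against your version's reference manual):

```
sp_locklogin 'sa', 'lock'
go
```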
At this point of the upgrade process, the HADR system is working normally, with ASE2 at the older
version and SRS1, SRS2, and ASE1 at the newer version. The topology looks like:
36. Log into RMA on site2 as the DR_admin user and issue:
This command allows a 30-second grace period for any running transactions to complete before the
deactivation starts. Failover will not succeed if there are still active transactions after 30 seconds. If this
occurs, retry the command when the system is not busy, use a longer grace period, or use the force
option to terminate the client connection (if it is safe) with:
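The force variant is not reproduced in this excerpt. As a hedged sketch, assuming sap_failover takes the primary site, the standby site, a grace period in seconds, and an optional force keyword:

```
sap_failover <primary_site>, <standby_site>, 30, force
```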
37. The sap_failover command may take a long time to finish. To check the status of the sap_failover
command, issue this from the RMA:
sap_status task
38. Once the sap_status command returns Completed, resume replication from ASE1 to SRS2, but not to
ASE2, by issuing this from the RMA (suspend ensures that no ticket is sent to verify the replication status
during the upgrade process):
The replication path from ASE1 to SRS2 is restored and all commits on ASE1 are synchronously committed
on SRS2, meaning there is zero data loss if site1 is lost during this time. Because replication to ASE2 is not
restored, the log records in SRS2 are not applied to ASE2. Make sure you have sufficient storage
configured in the SPQ for SRS2 for the short period until the upgrade process completes.
39. (Skip this step if you do not lock the sa login) Temporarily unlock the sa user on ASE2 for the upgrade. Log in as
the user with sapsso permission on ASE2 and issue:
40. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
SAP ASE on site2 (suspend ensures that no ticket is sent to verify the replication status during the
upgrade process):
shutdown SYB_BACKUP
go
shutdown
go
shutdown
Note
Do not enter shutdown at a UNIX prompt; it shuts down the operating system.
$SYBASE/COCKPIT-4/bin/cockpit.sh --stop
install_directory/setup.bin
44. Enter the current ASE SAP installation directory in the Choose Install Folder screen.
45. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the SAP ASE component in rolling upgrade:
The SAP installer must complete the software update before you continue to the next step.
46. After the SAP installer has completed the upgrade, use the updatease utility to upgrade SAP ASE, which
runs installmaster and performs other tasks to bring SAP ASE up to date. See the updatease
instructions in the earlier step.
47. Start Backup Server from the command line by issuing:
$SYBASE/ASE-16_0/install/RUN_<backupserver_name>
$SYBASE/COCKPIT-4/bin/cockpit.sh
○ In the background:
From the Bourne shell (sh) or Bash, issue:
49. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to complete the upgrade
for SAP ASE on site2:
50. (Skip this step if you do not lock the sa login) Log in to ASE2 as the user with sapsso permissions and issue
this once the upgrade is complete:
Note
At this point of the upgrade process, the HADR system is working normally, with SRS1, SRS2, ASE1,
and ASE2 at the newer version. The topology looks like:
<installation_directory>/FaultManager/setup.bin
52. Choose the existing ASE-installed directory on the Choose Install Folder screen. Do not choose to configure
Fault Manager in the installer.
53. Set the environment variables by sourcing SYBASE.csh for the C shell (or SYBASE.sh for the Bourne shell):
source <installation_directory>/SYBASE.csh
<Fault_Manager_install_dir>/FaultManager/sybdbfm_<CID>
These sections describe how to upgrade an SAP ASE version 15.7 disaster recovery (DR) solution to an SAP ASE
16.0 HADR solution in a Business Suite or a Custom Application environment.
● These steps require application downtime; its length depends on the size of the databases you are upgrading.
● If you do not follow the upgrade steps precisely as documented and in the proper sequence, you will lose
data.
The SAPHOSTAGENT<SP-version>.SAR archive contains all of the required elements for centrally monitoring
any host. It is available for all operating system platforms supported by SAP.
Context
The SAP Host Agent is automatically installed during the installation of SAP systems or instances with SAP
kernel 7.20 or higher.
Procedure
3. Choose Installation and Upgrades > By Alphabetical Index (A-Z) > H > SAP Host Agent > SAP Host
Agent 7.21, and select the highest available version.
4. Select the appropriate SAPHOSTAGENT<SP-version>.SAR archive from the Download tab.
Recommendation
Always select the highest SP version of the SAPHOSTAGENT<SP-version>.SAR archive, even if you
want to monitor a component of SAP NetWeaver with a lower release.
5. Make sure that the SAPCAR tool is available on the host where you want to install SAP Host Agent.
Windows – c:\temp\hostagent
UNIX – /tmp/hostagent
Recommendation
You can use the additional parameter -verify to verify the content of the installation package
against the SAP digital signature.
10. After the upgrade has finished successfully, you can check the version of the upgraded host agent by
executing the following command from the directory of the SAP Host Agent executables:
UNIX
○ If you are logged on as a user with root authorization, the command is as follows:
/usr/sap/hostctrl/exe/saphostexec -version
○ If you are logged on as a member of the sapsys group (for example, <sapsid>adm), the command is as follows:
/usr/sap/hostctrl/exe/hostexecstart -version
Procedure
stopsap r3
4. Check the ticket table on the standby site. Log in with isql and issue:
use master
go
select count(*) from rs_ticket_history
go
––––––––––
3
Verify this is the same ticket history in the replicated database (in this example, TIA):
use TIA
go
select count(*) from rs_ticket_history
go
––––––––-
3
5. Log in to the RMA, and check the SAP ASE transaction backlog and the Replication Server queue backlog
using sap_send_trace RMA command (this example runs the command against the PRI database):
sap_send_trace PRI
TASKNAME   TYPE              VALUE
---------- ----------------- -----------------------------------
Send Trace Start Time        Mon Nov 16 04:42:02 EST 2015
Send Trace Elapsed Time      00:00:00
Send Trace Task Name         Send Trace
Send Trace Task State        Completed
Send Trace Short Description Send a trace through the Replication system using rs_ticket
Send Trace Long Description  Successfully sent traces on participating databases.
Send Trace Host Name         Big_Host
(7 rows affected)
1> sap_status resource
2> go
Name                                        Value
------------------------------------------- -----------------------
Start Time                                  2016-08-24 15:37:55.327
Elapsed Time                                00:00:00
Estimated Failover Time                     0
PRI Replication device size (MB)            15360
PRI Replication device usage                112
COM Replication device size (MB)            15360
COM Replication device usage                128
PRI.master ASE transaction log (MB)         300
PRI.master ASE transaction log backlog (MB) 0
PRI.master Replication queue backlog (MB)   0
COM.master Replication queue backlog (MB)   0
PRI.TIA ASE transaction log (MB)            10240
PRI.TIA ASE transaction log backlog (MB)    0
PRI.TIA Replication queue backlog (MB)      0
COM.TIA Replication queue backlog (MB)      0
(15 rows affected)
use master
go
select count(*) from rs_ticket_history
go
––––––––––
4
Verify this is the same ticket history in the replicated database (in this example, TIA):
use TIA
go
select count(*) from rs_ticket_history
go
––––––––-
4
7. Uninstall the primary or the standby Replication Server (RMA internally tears down the entire HADR
system, including the primary and standby Replication Servers and the users and roles, once you issue
sap_teardown). Log into either RMA and issue:
sap_teardown
8. Upgrade SAP ASE to version 16.0. You can upgrade the primary and the standby servers at the same time.
Issue this from the command line:
For example:
9. Install the Data Movement option on the primary and standby servers using the silent install method.
a. Prepare a response file according to the instructions in Installing the HADR System with Response
Files, Console, and Silent Mode [page 66], indicating you are installing only the "SAP ASE Data
Movement for HADR."
b. Log on to the host as user syb<sid>.
c. Execute the response file according to these instructions Installing the HADR System in Silent Mode
[page 78].
10. Unlock the sa user on the primary and standby SAP ASE servers. Perform these steps on both
companions:
a. Log in to the primary SAP ASE database as user sapsso.
b. Issue:
11. Unlock user sa on the primary and standby SAP ASE servers. Perform these steps on both companions:
a. Log in to the primary SAP ASE database as user sapsso.
12. Configure SAP ASE for the HADR environment. Follow the instructions in Installing HADR with an Existing
System [page 85] for configuring the primary and standby servers.
13. Lock user sa on the primary and standby SAP ASE servers. Perform these steps on both companions:
a. Log in to the primary SAP ASE database as user sapsso.
b. Issue:
14. Use the sap_status command to check the Replication Server status after the upgrade. Issue this at the
RMA isql prompt:
sap_status path
You should see this line for the Replication Server Status in the output:
For example:
sap_tune_rs Big_Host, 8, 2
sap_tune_rs Other_Big_Host, 8, 2
startsap r3
When you install HADR in a Business Suite environment, you install SAP ASE, add the Data Movement
component to SAP ASE, and run the setuphadr utility on both primary and standby hosts.
Before installing or upgrading HADR with SAP Business Suite on SAP ASE, you may have to follow instructions
from your application vendors over the general guidelines in this chapter. See SAP note 2808173 for more
details.
● SAP recommends the following sizes for the server resources in an HADR system:
Small 7 4 2 1
Medium 15 8 4 2
Large 25 16 8 4
Extra Large 25 24 16 8
● HADR for Business Suite supports the Fault Manager. See SAP Note 1959660.
● The installation environment requires two hosts: a primary and a standby host.
● The installation release directory requires at least 80 GB of free space.
● Refer to the SAP ASE and Replication Server installation guides at help.sap.com for hardware and
software prerequisite checks.
● An HADR system in the Business Suite environment includes the system ID (SID) database (this database
is created as part of the Business Suite installation).
The RMA sap_set <global_level_property> variable includes additional parameters in a Business Suite
environment:
● <sap_sid> – consists of three alphanumeric characters, and denotes the SAP System ID.
● <installation_mode> – specifies the HADR system type. For Business Suite, the mode is BS.
sapinst – Use the latest available version, or at a minimum, SWPM10SP22_0.SAR. The SAP Installer used to
install SAP software, including SAP Business Suite.
Kernel – The version depends on which supported application version you are installing. See SAP Note
1554717. The main component of all SAP Applications; the Kernel contains the executable files for starting
various SAP processes.
Exports – There are a number of supported Business Suite applications, including NetWeaver, ERP, CRM, and
so on. Contains the tables, code, and transactions required for SAP Applications. An export media identifies
which SAP application is being installed on the system.
SAP ASE for Business Suite for HADR – 16.0 SP04 PL02. Software for the SAP HADR system.
3. Use a file copy tool (such as Filezilla) to copy SAP ASE, the kernel, and exports images to the primary and
companion hosts.
Configuring the HADR on the primary includes installing the Data Management software and running the
setuphadr utility.
Install the Business Suite application using the application installation process.
Context
The installation process varies depending on which installation application you use. The following example
describes the NetWeaver installation process.
Procedure
1. Move to the sapinst directory, which was created when the SAPCAR.exe utility extracted files.
2. Execute the sapinst utility to start the SAP installation GUI.
3. Select SAP NetWeaver 7.5 > SAP ASE > SAP Systems > Application Server ABAP >
Standard System and click Next.
4. Define the parameters. Select either the Typical or the Custom parameter mode button (Typical does not
display all input parameters), and click Next.
5. Enter the SAP SID and the destination drive. The SAP SID comprises three alphanumeric characters and
must be unique on your system. The SAP SID becomes the name of the destination directory into which
the NetWeaver software is loaded.
6. Click Next.
7. Enter and confirm the master password, then click Next.
8. Define the kernel path by choosing Provide the path to installation media (DVD or Blu-ray disc) and entering
the path in the box provided, or by selecting Browse to explore the system. The path you include depends
on the platform:
○ (Linux) – <Kernel_PATH>/DATA_UNITS/K_745_U_LINUX_X86_64
○ (AIX) – <Kernel_PATH>/DATA_UNITS/K_745_U_AIX_PPC64
○ (HPUX) – <Kernel_PATH>/DATA_UNITS/K_745_U_HPUX_IA64
○ (Solaris) – <Kernel_PATH>/DATA_UNITS/K_745_U_SOLARIS_SPARC
○ (Windows) – <Kernel_PATH>\DATA_UNITS\K_745_U_WINDOWS_X86_64
Click Next.
○ Physical Memory – the amount of memory that SAP ASE is using at any time.
○ Number of Cores – the number of processors in your system.
○ Number of Database Connections – the maximum number of connections to SAP ASE.
Click Next.
12. Enable ABAP table declustering by selecting the button for Enable declustering/depooling of all ABAP
tables. Click Next.
13. Select the No SLD destination option under the Register in System Landscape Directory heading. Click
Next.
14. Determine the secure storage key by selecting Default Key under the Secure Storage Individual Key
Information heading. Click Next.
15. The installer provides a summary of the configuration. Change any values that are incorrect:
○ Parameter Settings > Parameter Mode – Typical for Parameter Mode
○ General SAP System Parameters – One of:
○ SAP System ID (SAPSID) – the value for the global SAP SID, which you supplied earlier in this task.
○ SAP Mount Directory – the path to the mount directory
○ Master Password – the master password, used by users to log in to the system
○ Software Package Browser – One of:
○ Media Location – the directory on which the installation media is mounted
○ Package Location – the directory in which the software package is located
16. Click Next to start the package installation.
Procedure
#
# This responses file installs the "SAP ASE Data Movement for HADR" feature for
# Business Suite
#
RUN_SILENT=true
AGREE_TO_SYBASE_LICENSE=true
AGREE_TO_SAP_LICENSE=true
PRODUCTION_INSTALL=TRUE
INSTALL_SETUP_HADR_SAMPLE=true
# Windows only
DO_NOT_CREATE_SHORTCUT=true
REGISTER_UNINSTALLER_WINDOWS=false
INSTALL_USER_PROFILE=USER
3. In the line defining the USER_INSTALL_DIR, edit the value of <ASE_installed_directory> to point to
your SAP ASE installation directory. For example:
USER_INSTALL_DIR=/sybase/<SID>
Note
On Windows, use the double back slash (\\) to split paths. For example, enter "E:\sybase\<SID>" as
"E:\\sybase\\<SID>".
4. Run the SAP ASE installer in silent mode to install the Data Movement component, where
<response_file> is the absolute path of the file name you just created:
○ (UNIX) – execute setup.bin. Use this syntax:
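The syntax line is not reproduced in this excerpt. As a hedged sketch of a typical SAP ASE silent installation invocation (the flag names are assumptions; confirm them with the installation guide):

```
./setup.bin -f <response_file> -i silent \
    -DAGREE_TO_SAP_LICENSE=true -DRUN_SILENT=true
```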
Procedure
○ (Windows) Use the double back slash (\\) to split paths. For example, enter "E:\sybase" as
"E:\\sybase".
○ You do not need the <SID> database as it is automatically created by SWPM (sapinst).
○ Set these properties on the primary site:
setup_site=<primary_site>
is_secondary_site_setup=false
See Sample setup_hadr.rs Response File for Business Suite [page 187] for examples of the changes
required.
3. Run setuphadr with the response file:
E:\>E:\sybase\NW7\ASE-16_0\bin\setuphadr setup_SFHADR.rs
Setup user databases
Set "NW7" database "trunc log on chkpt" option to "false"...
Setup user databases...Success
Setup ASE HADR maintenance user
Create maintenance login "NW7_maint"...
Grant "sa_role" role to "NW7_maint"...
Grant "replication_role" role to "NW7_maint"...
Grant "replication_maint_role_gp" role to "NW7_maint"...
Create "sap_maint_user_role" role...
Grant set session authorization to "sap_maint_user_role"...
Grant "sap_maint_user_role" role to "NW7_maint"...
Add auto activated roles "sap_maint_user_role" to user "NW7_maint"...
Allow "NW7_maint" to be known as dbo in "master" database...
Allow "NW7_maint" to be known as dbo in "NW7" database...
Setup ASE HADR maintenance user...Success
Setup administrator user
Create administrator login "DR_admin"...
Grant "sa_role" role to "DR_admin"...
Grant "sso_role" role to "DR_admin"...
Grant "replication_role" role to "DR_admin"...
Configuring the HADR on the companion includes installing NetWeaver, the Data Management software, and
running the setuphadr utility.
Install the Business Suite application using the application installation process.
Context
The installation process varies depending on which installation application you use. The following example
describes the NetWeaver installation process.
Procedure
1. Move to the sapinst directory, which was created when the SAPCAR.exe utility extracted files.
2. Execute the sapinst utility to start the SAP installation GUI.
3. Select SAP NetWeaver 7.5 > SAP ASE > Database Replication > Setup of Replication Environment and
click Next.
4. Specify the Replication Server parameters, then click Next:
○ SAP System ID – comprises three alphanumeric characters and is the same as the SAP SID you
entered for the primary
○ Master Password – is the same as the master password you entered for the primary
○ SAP Global Host Name – is the host name of the machine on which you are installing the software
○ Set up a secondary database instance – select to confirm.
○ Install the replication server software – leave blank
○ Configure the replication system – leave blank
Procedure
#
# This responses file installs the "SAP ASE Data Movement for HADR" feature for
# Business Suite
#
RUN_SILENT=true
AGREE_TO_SYBASE_LICENSE=true
AGREE_TO_SAP_LICENSE=true
PRODUCTION_INSTALL=TRUE
INSTALL_SETUP_HADR_SAMPLE=true
# Windows only
DO_NOT_CREATE_SHORTCUT=true
REGISTER_UNINSTALLER_WINDOWS=false
INSTALL_USER_PROFILE=USER
DO_NOT_CREATE_RMA_WINDOW_SERVICE=true
#chadr
INSTALL_SCC_SERVICE=false
USER_INSTALL_DIR=<ASE_installed_directory>
# Install HADR ("SAP ASE Data Movement for HADR" feature)
DO_UPDATE_INSTALL=false
CHOSEN_INSTALL_SET=Custom
CHOSEN_FEATURE_LIST=fase_hadr
CHOSEN_INSTALL_FEATURE_LIST=fase_hadr
INSTALL_SAP_HOST_AGENT=FALSE
# License
SYBASE_PRODUCT_LICENSE_TYPE=license
SYSAM_LICENSE_SOURCE=proceed_without_license
SYSAM_PRODUCT_EDITION=Enterprise Edition
SYSAM_LICENSE_TYPE=AC : OEM Application Deployment CPU License
SYSAM_NOTIFICATION_ENABLE=false
# Do not configure new servers
SY_CONFIG_ASE_SERVER=false
SY_CONFIG_HADR_SERVER=false
SY_CONFIG_BS_SERVER=false
SY_CONFIG_XP_SERVER=false
3. In the line defining the USER_INSTALL_DIR, edit the value of <ASE_installed_directory> to point to
your SAP ASE installation directory. For example:
USER_INSTALL_DIR=/sybase/<SID>
Note
On Windows, use the double back slash (\\) to split paths. For example, enter "E:\sybase\<SID>" as
"E:\\sybase\\<SID>".
4. Run the installer in silent mode to install the Data Movement component, where <response_file> is the
absolute path of the file name you just created:
Procedure
setup_site=COMP
is_secondary_site_setup=true
See Sample setup_hadr.rs Response File for Business Suite [page 187] for an example of the necessary
changes.
4. As syb<SID>, run setuphadr with the response file:
○ (UNIX) – $SYBASE/$SYBASE_ASE/bin/setuphadr <path_to_response_file>
○ (Windows) – %SYBASE%\%SYBASE_ASE%\bin\setuphadr.bat <path_to_response_file>
./ASE-16_0/bin/setuphadr setup_SJHADR.rs
Setup user databases
Set "NW7" database "trunc log on chkpt" option to "false"...
Setup user databases...Success
Setup Backup server allow hosts
Backup server on "COMP" site: Add host "Huge_Machine1.corp" to allow
dump and load...
Backup server on "PRIM" site: Add host "Huge_Machine2.corp" to allow
dump and load...
Setup Backup server allow hosts...Success
Setup RMA
Set SAP ID to "NW7"...
Set installation mode to "BS"...
Set site name "SFHADR1" with SAP ASE host:port to "Huge_Machine1.corp:
4901" and Replication Server host:port to "Huge_Machine1.corp:4905"...
Set site name "SJHADR2" with SAP ASE host:port to "Huge_Machine2.corp:
4901" and Replication Server host:port to "Huge_Machine2.corp:4905"...
Set site name "SFHADR1" with Backup server port to "4902"...
Set site name "SJHADR2" with Backup server port to "4902"...
Set site name "SFHADR1" databases dump directory to "/sybase/NW7/
data"...
Set site name "SJHADR2" databases dump directory to "/sybase/NW7/
data"...
Set site name "SFHADR1" synchronization mode to "sync"...
Set site name "SJHADR2" synchronization mode to "sync"...
Set site name "SFHADR1" distribution mode to "remote"...
Set site name "SJHADR2" distribution mode to "remote"...
Set site name "SFHADR1" distribution target to site name "SJHADR2"...
Set site name "SJHADR2" distribution target to site name "SFHADR1"...
Set maintenance user to "NW7_maint"...
Set site name "SFHADR1" device buffer directory to "/sybase/NW7/
data"...
Set site name "SJHADR2" device buffer directory to "/sybase/NW7/
data"...
Set site name "SFHADR1" device buffer size to "512"...
Set site name "SJHADR2" device buffer size to "512"...
Set site name "SFHADR1" simple persistent queue directory to "/
sybase/NW7/data"...
Set site name "SJHADR2" simple persistent queue directory to "/
sybase/NW7/data"...
Set site name "SFHADR1" simple persistent queue size to "2000"...
Set site name "SJHADR2" simple persistent queue size to "2000"...
Set master, NW7 databases to participate in replication...
Setup RMA...Success
Setup Replication
Setup replication from "SFHADR1" to "SJHADR2"...
Configuring remote replication server..........................
There are a number of tasks you must perform on the primary and companion servers after installation.
On a Windows system, after the installation is complete on the primary and companion hosts, Replication
Server is running, but not as a service to RMA.
Context
Procedure
Use the rsecssfx put command to add the DR_admin entry to SecureStore.
Procedure
Perform these steps on the primary and companion servers to configure Replication Server with the
maximum number of CPUs and the maximum amount of memory for the Replication Server instance.
Procedure
For example, this tunes Replication Server on logical host SJHADR2 with 4 GB memory and 2 CPUs:
sap_tune_rs SJHADR2,4,2
If you would like your SAP Application Server to automatically fail over to the standby SAP ASE when you
perform a database-level failover, add the dbs_syb_ha and dbs_syb_server environment variables.
● On Windows:
1. Log into the SAP Primary Application Server (PAS) host as the <SID>adm user.
2. From the System Properties window, click the Advanced tab and select Environment Variables.
3. Select New.
4. Enter the following, then click OK:
○ Variable Name – dbs_syb_ha
○ Variable value – 1
5. Select the dbs_syb_server user variable, and click Edit to enter the following values:
○ Variable Name – dbs_syb_server
○ Variable value – <host_name><standby server>
6. Click OK.
7. Click OK.
8. Restart NetWeaver.
9. Log into the SAP Management Console (sapmmc).
10. Right-click on Console Root SAP Systems NW7 .
11. Select Restart.
12. Make the same change on all additional SAP Application Servers.
● On Linux
setenv dbs_syb_ha 1
setenv dbs_syb_server <primary_server_name>:<standby_server_name>
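The setenv commands above use C shell syntax. For Bourne shell or Bash profiles, the equivalent (a sketch; the placeholder values are taken from the step above) is:

```shell
# Bourne shell / Bash equivalents of the csh setenv commands above;
# <primary_server_name> and <standby_server_name> are placeholders.
export dbs_syb_ha=1
export dbs_syb_server="<primary_server_name>:<standby_server_name>"
```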
2. Restart NetWeaver by issuing these commands on the primary server as the <SID>adm user:
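The restart commands are not reproduced at this point in the excerpt. Elsewhere, this guide stops and starts the application with stopsap r3 and startsap r3, so a restart sketch as the <SID>adm user would be:

```
stopsap r3
startsap r3
```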
This is a sample setup_hadr.rs file. The text changed for the installation described in this guide is in bold.
###############################################################################
# Setup HADR sample responses file
#
# This sample responses file sets up ASE HADR on
# hosts "host1" (primary) and "host2" (companion).
#
# Prerequisite :
# - New SAP ASE and Backup servers setup and started on "host1" and "host2".
# See HADR User Guide for requirements on SAP ASE servers.
# - Replication Management Agent (RMA) started on "host1" and "host2".
#
# Usage :
# 1. On host1 (primary), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <this_responses_file>
#
# 2. Change this responses file properties:
# setup_site=COMP
# is_secondary_site_setup=true
#
# 3. On host2 (companion), run
# $SYBASE/$SYBASE_ASE/bin/setuphadr <responses_file_from_step_2>
#
###############################################################################
# ID that identifies this cluster
#
# Value must be unique,
# begin with a letter, and
# be 3 characters in length.
# Note: Set value to your SID in case of HADR on SAP Business Suite installations
cluster_id=NW7
# Which site being configured
#
# Note:
# You need to set "<setup_site_value>.*"
# properties in this responses file.
setup_site=PRIM
# Set installation_mode
#
# Valid values: true, false
#
# If set to true, installation_mode will be set to "BS".
# If set to false, installation_mode will be set to "nonBS"
# Note: Set value to true for HADR on SAP Business Suite installations
setup_bs=true
# Note: Set enable_ssl to false for HADR on SAP Business Suite Installations
#
The Fault Manager monitors the health of the primary and standby servers, and triggers a failover if the primary
server or host fails, and the HADR system is running in synchronous mode.
The Fault Manager is a standalone component that runs on a third node, preferably where the application
server is running, and on the same platform as the HADR system nodes.
The Fault Manager functions in two modes: Fault Manager mode and heartbeat client mode. The
Fault Manager runs on a third host. In Fault Manager mode, it monitors SAP ASE and Replication Server, performs
functions such as initiating failover and restarting servers, and acts as the server for the heartbeats that it
receives from the heartbeat clients.
The Fault Manager heartbeat client mode runs on the primary and standby hosts. In heartbeat client mode,
each heartbeat client sends a heartbeat to the Fault Manager, checks for heartbeats from its fellow heartbeat
client, and sends its own heartbeat to it (primarily to avoid a split-brain situation). If the heartbeat client on the
primary host loses its connections to both the Fault Manager and the fellow heartbeat client, it triggers a
deactivation of the primary server. If the deactivation fails, it kills the SAP ASE process.
The Fault Manager checks the database state with the saphostctrl SAP host agent, which is a daemon
process started on all participating nodes. The Fault Manager also uses saphostctrl to connect to the
Replication Management Agent. See the chapter titled The SAP Host Agent.
● Triggers a failover using saphostctrl if the primary server is down or if the primary node is down or
unreachable, and the standby server is healthy and synchronously replicated.
● Restarts the primary server if it is down and replication is asynchronous.
Note
Stop or hibernate the Fault Manager when you perform any maintenance activity on SAP ASE or other
components in the HADR system. Once hibernated, the Fault Manager process continues to run but will not
monitor the database, and no failover occurs. The heartbeat processes are stopped during hibernation.
Primary server is unreachable (network glitch or SAP ASE unresponsive) – Retry shallow probe, try deep probe,
probe companion, fail over to companion, notify cockpit. Attempt restarting primary SAP ASE if possible when
HA is off.
Primary server reports an error condition – If client login and data access are unaffected, no action is taken;
Fault Manager does not scan the SAP ASE log for errors. If the error results in login failures or data access
errors, fail over to companion if ha/syb/failover_if_unresponsive=1 is included in the profile file.
HA services on the companion are down, disrupting replication from the primary server to the companion
server – Attempt to restart HA services (may need manual intervention).
HA services on companion are restored – After syncing up the backlog of transaction logs, automatically
switch to sync mode replication and turn on HA.
Fault Manager components on primary are down – Attempt to restart failed components if
ha/syb/chk_restart_repserver=1 is included in the profile file; notify cockpit of the success or failure
of the restart and the HA on/off status (may need manual intervention).
Fault Manager is down – In this version of the software, manually restart the Fault Manager (see
Administering the Fault Manager [page 145]).
Failover fails – Attempt to fail over again until failover succeeds or the condition causing failover is rectified
(may need manual intervention).
Fault Manager components on companion are down – Attempt to restart failed components.
Fault Manager is unreachable from the two sites (primary and companion) – If the network between the
primary and companion is OK, continue as is. However, if there is a network problem between primary and
companion, deactivate the primary to avoid split brain and notify cockpit. HA is off and the application has
no access to the database (will need manual intervention).
Primary and companion are unreachable from Fault Manager – No action performed; replication continues
as normal.
Heartbeat from primary is missed for a preconfigured timeout – If SAP ASE is not reachable (confirmed by
local agent), then fail over. If SAP ASE is reachable and HA is not working, restart HA.
Heartbeat from companion is missed for a preconfigured timeout – If SAP ASE is unreachable (confirmed by
local agent), then fail over. If SAP ASE is reachable and HA is not working, restart HA.
Companion SAP ASE is down – Attempt to restart companion SAP ASE if
ha/syb/allow_restart_companion=1 is included in the profile file.
Companion SAP ASE is unreachable (network glitch or SAP ASE unresponsive) – Attempt to restart
companion SAP ASE when possible.
HA services on primary are down; no impact to HA until failover – Attempt to restart companion SAP ASE
when possible.
The Fault Manager requires username and password combinations to connect to SAP ASE, RMA, and SAP Host
Agent. These usernames and passwords are stored in an encrypted format in the SecureStore.
During configuration, the Fault Manager adds usernames and passwords for the following users in the
SecureStore:
● SADB_USER – SAP ASE user with the sa_role and replication_role roles.
● DR_USER – RMA user, used for connecting to RMA.
● SAPADM_USER – Operating system user, mostly used for sapadm for SAP HostAgent.
Use the rsecssfx utility to administer the SecureStore. If any usernames or passwords change, update them in the SecureStore: stop the Fault Manager, update the SecureStore using the rsecssfx utility, and restart the Fault Manager. Keep the Fault Manager stopped while the password is changed in the cluster components.
● Use the rsecssfx utility to list, add, or update entries in the SecureStore. For example, to list the existing entries:
./FaultManager/bin/rsecssfx list
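Because the stop, update, restart sequence is easy to get out of order, it can help to script it. A minimal sketch, assuming a hypothetical SecureStore key name (run rsecssfx list to see the entries in your deployment) and assuming sybdbfm accepts stop and start subcommands; with DRYRUN=1 (the default here) it only prints the plan for review:

```shell
#!/bin/sh
# Sketch of a password-rotation sequence for the Fault Manager SecureStore.
# The key name below is a hypothetical example, and the sybdbfm start/stop
# subcommands are assumed. With DRYRUN=1 the plan is printed, not executed.
DRYRUN=${DRYRUN:-1}
FM_BIN=${FM_BIN:-./FaultManager/bin}

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "PLAN: $*"
    else
        "$@"
    fi
}

rotate_secret() {
    run "$FM_BIN/sybdbfm" stop                  # stop FM before changing secrets
    run "$FM_BIN/rsecssfx" put "$1" "$2"        # update the SecureStore entry
    run "$FM_BIN/sybdbfm" start                 # restart FM to pick up the change
}

rotate_secret "HADR/DR_USER_PASSWORD" "new_secret"
```

Run once with DRYRUN=1 to review the plan, then with DRYRUN=0 to execute it.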
Note
Fault Manager is installed by default as part of the Kernel utilities during SAP NetWeaver installation, in the following location: /usr/sap/<SID>/SYS/exe/run. For details, refer to SAP Note 1959660.
Use the sybdbfm utility to view the status of the Fault Manager. For example:
$ sybdbfm status
fault manager running, pid = 17763, fault manager overall status = OK, currently
executing in mode PAUSING
*** sanity check report (1)***.
node 1: server star1, site hasite0.
db host status: OK.
db status OK hadr status PRIMARY.
node 2: server star2, site hasite1.
db host status: OK.
db status OK hadr status STANDBY.
replication status: SYNC_OK.
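For monitoring scripts, the overall status field can be extracted from this output. A sketch, assuming the field layout shown above (which may vary between versions):

```shell
#!/bin/sh
# Sketch: extract the "overall status" field from `sybdbfm status` output.
# The sample line mirrors the output shown above.
parse_fm_status() {
    # Pull the value after "overall status = " up to the next comma.
    sed -n 's/.*overall status = \([^,]*\),.*/\1/p'
}

sample='fault manager running, pid = 17763, fault manager overall status = OK, currently executing in mode PAUSING'
status=$(printf '%s\n' "$sample" | parse_fm_status)
echo "$status"
```

In practice you would pipe the live output: sybdbfm status | parse_fm_status.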
Edit the Fault Manager profile file to change any parameter. The profile file is named SYBHA.PFL, and is located
in the install directory of the Fault Manager on all platforms. Restart the Fault Manager for the profile parameter
changes to take effect.
You should continuously monitor the Fault Manager log (named dev_sybdbfm, located in /usr/sap/<SID>/ASCS00/work).
Note
If a problem related to Fault Manager or the heartbeat requires you to consult SAP, back up the following
data when the problem occurs:
How you uninstall the Fault Manager depends on whether you installed it using the SAP installer or the
sybdbfm utility.
sybdbfm stop
2. Remove SecureStore-related files by issuing this from the directory that contains SYBHA.PFL:
2. (If you installed Fault Manager on a completely separate host that is not the SAP application server or the
database host) Move to $SYBASE/sybuninstall/FaultManager and issue:
./uninstall
3. (If you installed Fault Manager on a completely separate host that is not the SAP application server or the
database host) In the Uninstall Options screen, select the appropriate option.
The HADR feature allows SAP ASE applications to operate with zero down time while you are updating the SAP
ASE software.
Complete the upgrade steps in a single sequence: partial upgrade is not supported (for example, you cannot
upgrade some components now and then upgrade the other components at another time). Replication is
suspended during some steps of a rolling upgrade, and if you perform a partial upgrade, logs continue to grow,
which can result in logs or the SPQ running out of space. During a rolling upgrade, the versions between SAP
ASE and Replication Server need not match.
The RUN_<rs_instance_name>.sh Replication Server runserver file is regenerated during an upgrade, and any user changes to this file are lost. If your site requires these changes, edit the runserver file after the upgrade is complete, then restart Replication Server to make the environment settings take effect.
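To avoid losing site-specific edits, you can snapshot the runserver file before the upgrade and diff it afterwards. A sketch, demonstrated here on a temporary stand-in file rather than a real RUN_<rs_instance_name>.sh:

```shell
#!/bin/sh
# Sketch: snapshot the runserver file before the upgrade and list the user
# edits that the regenerated file dropped, so they can be re-applied.
preserve_runserver() {
    cp "$1" "$1.pre_upgrade"        # snapshot before the installer regenerates it
}

lost_changes() {
    # Lines present before the upgrade but missing afterwards.
    diff "$1.pre_upgrade" "$1" | grep '^<' || true
}

# Demonstration on a temporary stand-in file:
tmp=$(mktemp)
printf 'export MY_SETTING=1\n' > "$tmp"
preserve_runserver "$tmp"
: > "$tmp"                          # simulate the installer regenerating the file
lost=$(lost_changes "$tmp")
echo "$lost"
rm -f "$tmp" "$tmp.pre_upgrade"
```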
Note
Before upgrading HADR with SAP Business Suite on SAP ASE, you may have to follow instructions from
your application vendors over the general guidelines in this chapter. See SAP note 2808173 for more
details.
In this topology, the primary server (ASE1) is installed on the same host as the inactive Replication Server
(SRS1). The active Replication Server (SRS2) is installed on a remote host, along with the standby server
(ASE2). Data changes that occur in ASE1 are sent by the Replication Agent thread to the active SRS2 running
on the remote host. The active SRS2 then routes these changes to ASE2, which is running on the same host as
the active Replication Server, SRS2. In this setup, the inactive Replication Server, SRS1, is not involved in data
movement until failover occurs. The communication among ASE1, SRS1, and ASE2 is through a client interface
(stream replication, indicated in this topology as "CI").
Run this command to determine which SAP ASE you are connected to in the HADR system:
select asehostname()
In this configuration, all components are running, and the standby server is almost in sync with the primary
server. Prior to upgrade, site1 is the primary server and site2 is the companion server (in high-availability – HA
– mode, the companion server is referred to as the standby server) with remote replication topology. The
Replication Server versions prior to the upgrade are compatible with the Replication Server versions after the
upgrade. If you upgrade from a "1-Off" release, you can upgrade only the SAP ASE or Replication Servers.
Note
Stop the Fault Manager before you perform a rolling upgrade (even if you are performing planned activities
like a planned failover). You can start the Fault Manager after the upgrade is complete. To stop the Fault
Manager, issue this from the <installation_directory>/FaultManager directory:
<Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
To perform a rolling upgrade, you first upgrade SRS1 on site1 to a higher version:
shutdown
4. Remove the RMA service: On Windows, execute the following command from either the %SYBASE%
\RMA-16_0\compatibility\WinService\Win32\Release directory, or the %SYBASE%
\RMA-16_0\compatibility\WinService\x64\Release directory, to remove the RMA service –
install_directory/setup.bin
6. In the Choose Install Folder screen, enter the current SAP ASE installation directory, then click Next:
7. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the Data Movement component in rolling upgrade.
The SAP installer must complete the software update before you continue to the next step.
8. Install a new RMA service on Windows: To install, and then start the new RMA service on Windows, execute
the following command from either the %SYBASE%\RMA-16_0\compatibility\WinService
\Win32\Release directory, or the %SYBASE%\RMA-16_0\compatibility\WinService
\x64\Release directory –
10. Log in to RMA on site1 as the DR_admin user and issue sap_upgrade_server to finish the upgrade for
Replication Server on site1:
At this point of the upgrade process, the HADR system is working normally with ASE1, SRS2, ASE2 at
the older versions, and SRS1 at newer version.
11. Log into RMA on site1 as the DR_admin user and issue:
This command allows a 30-second grace period for any running transactions to complete before the
deactivation starts. Failover will not succeed if there are still active transactions after 30 seconds. If this
occurs, retry the command when the system is not busy, use a longer grace period, or use the force
option to terminate the client connection (if it is safe) with:
12. The sap_failover command may take a long time to finish. To check the status of the sap_failover
command, issue this from the RMA:
sap_status task
13. Once the sap_status command returns Completed, resume replication by issuing this from the RMA:
sap_host_available <site1_site_name>
14. Verify that Replication Server is not running any isql processes during the Replication Server installation
step below. If there are isql processes running, Replication Server issues an error message stating "isql
text file busy".
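A quick way to check for stray isql processes is to count them in ps output. A sketch, demonstrated on canned ps output:

```shell
#!/bin/sh
# Sketch: count running isql processes before the Replication Server
# installation step, to avoid the "isql text file busy" error.
count_isql() {
    # The [i] trick prevents the grep pattern itself from matching.
    grep -c '[i]sql'
}

# In practice, feed it live process data:  ps -ef | count_isql
# Demonstration on canned ps output:
sample='user  101  1 0 10:00 ?  00:00:00 isql -Usa -SSRS1
user  102  1 0 10:01 ?  00:00:00 bash'
n=$(printf '%s\n' "$sample" | count_isql)
echo "isql processes: $n"
```

If the count is nonzero, wait for those sessions to finish before starting the installer.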
15. Log in to RMA on site2 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
Replication Server on site2:
shutdown
17. Remove the RMA service: On Windows, execute the following command from either the %SYBASE%
\RMA-16_0\compatibility\WinService\Win32\Release directory, or the %SYBASE%
\RMA-16_0\compatibility\WinService\x64\Release directory, to remove the RMA service –
<install_directory>/setup.bin
19. In the Choose Install Folder screen, enter the current SAP ASE installation directory.
20. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the Data Movement component in rolling upgrade.
The SAP installer must complete the software update before you continue to the next step.
21. Install a new RMA service on Windows: To install, and then start the new RMA service on Windows, execute
the following command from either the %SYBASE%\RMA-16_0\compatibility\WinService
\Win32\Release directory, or the %SYBASE%\RMA-16_0\compatibility\WinService
\x64\Release directory –
22. After the SAP installer has finished the upgrade, start RMA:
○ (UNIX) – $SYBASE/$SYBASE_ASE/bin/rma
○ (Windows) – start the RMA Windows service by either of the following:
○ Starting Sybase DR Agent - <cluster_ID> from the Services panel
○ Issuing this command, where <cluster_ID> is the ID of the cluster:
23. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to finish the upgrade for
Replication Server on site2:
At this point of the upgrade process, the HADR system is working normally with ASE1 and ASE2 at the
older versions, and SRS1 and SRS2 at the newer version.
24. (Skip this step if you do not lock the sa user) If the sa user is locked, temporarily unlock this user on ASE1
during the upgrade process by logging in as the user with SSO permission on ASE1 and issuing:
25. Log into RMA on site1 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
SAP ASE on site1:
shutdown SYB_BACKUP
go
shutdown
go
27. Shut down SAP ASE Cockpit. If the SAP ASE Cockpit is running:
○ In the foreground – At the cockpit> prompt, execute:
shutdown
Note
Do not enter shutdown at a UNIX prompt: Doing so shuts down the operating system.
$SYBASE/COCKPIT-4/bin/cockpit.sh --stop
<install_directory>/setup.bin
29. In the Choose Install Folder screen, enter the current SAP ASE installation directory.
Note
The SAP installer must complete the software update before you continue to the next step.
31. After the SAP installer has completed the upgrade, use the updatease utility to upgrade SAP ASE, which
runs installmaster and performs other tasks to bring SAP ASE up to date. updatease is available as a
GUI display or a command line tool.
Note
You need not perform this step if you update the SAP ASE server instance in the SAP installer.
./updatease
./updatease
Server: SFSAP1
ASE Password:
Updating SAP Adaptive Server Enterprise 'SFSAP1'...
Running installmaster script...
installmaster: 10% complete.
installmaster: 20% complete.
installmaster: 30% complete.
$SYBASE/ASE-16_0/install/RUN_<backupserver_name>
$SYBASE/COCKPIT-4/bin/cockpit.sh
○ In the background:
From the Bourne shell (sh) or Bash, issue:
34. Log into RMA on site1 as the DR_admin user and issue sap_upgrade_server to complete the upgrade
for SAP ASE on site1:
35. (Skip this step if you do not lock the sa login) Log in to ASE1 as a user with sapsso permission and issue
this once the upgrade is complete:
At this point of the upgrade process, the HADR system is working normally, with ASE2 at the older
versions and SRS1, SRS2, and ASE1 at the newer versions. The topology looks like:
36. Log into RMA on site2 as the DR_admin user and issue:
This command allows a 30-second grace period for any running transactions to complete before the
deactivation starts. Failover will not succeed if there are still active transactions after 30 seconds. If this
occurs, retry the command when the system is not busy, use a longer grace period, or use the force
option to terminate the client connection (if it is safe) with:
37. The sap_failover command may take a long time to finish. To check the status of the sap_failover
command, issue this from the RMA:
sap_status task
38. Once the sap_status command returns Completed, resume replication from ASE1 to SRS2, but not to
ASE2, by issuing this from the RMA (suspend ensures that no ticket is sent to verify the replication status
during the upgrade process):
The replication path from ASE1 to SRS2 is restored and all commits on ASE1 are synchronously committed
on SRS2, meaning there is zero data loss if site1 is lost during this time. Because replication to ASE2 is not
restored, the log records in SRS2 are not applied to ASE2. Make sure you have sufficient storage
configured in the SPQ for the SRS2 for the short period of time until the upgrade process completes.
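One way to size the SPQ for this window is a back-of-envelope calculation from the measured log generation rate. The figures below are illustrative assumptions only, not recommendations:

```shell
#!/bin/sh
# Back-of-envelope SPQ sizing for the window in which replication to ASE2
# is suspended. All three inputs are illustrative assumptions -- measure
# your own log generation rate and estimate your own upgrade duration.
LOG_RATE_MB_PER_MIN=20     # observed primary transaction log growth
UPGRADE_MINUTES=90         # expected length of the suspended window
SAFETY_FACTOR=2            # headroom for bursts and retries

required_mb=$((LOG_RATE_MB_PER_MIN * UPGRADE_MINUTES * SAFETY_FACTOR))
echo "Provision at least ${required_mb} MB of SPQ space on SRS2"
```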
39. (Skip this step if you do not lock the sa login) Temporarily unlock the sa user on ASE2 for upgrade. Login as
the user with sapsso permission on ASE2 and issue:
40. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to start the upgrade for
SAP ASE on site2 (suspend ensures that no ticket is sent to verify the replication status during the
upgrade process):
shutdown SYB_BACKUP
go
shutdown
go
shutdown
Note
Do not enter shutdown at a UNIX prompt; it shuts down the operating system.
$SYBASE/COCKPIT-4/bin/cockpit.sh --stop
install_directory/setup.bin
44. Enter the current SAP ASE installation directory in the Choose Install Folder screen.
45. In the Choose Update Installation screen, determine if you want the installer to select and apply updates,
then select Update only the SAP ASE component in rolling upgrade:
The SAP installer must complete the software update before you continue to the next step.
46. After the SAP installer has completed the upgrade, use the updatease utility to upgrade SAP ASE, which
runs installmaster and performs other tasks to bring SAP ASE up to date. See the updatease
instructions in the earlier step.
47. Start Backup Server from the command line by issuing:
$SYBASE/ASE-16_0/install/RUN_<backupserver_name>
$SYBASE/COCKPIT-4/bin/cockpit.sh
○ In the background:
From the Bourne shell (sh) or Bash, issue:
49. Log into RMA on site2 as the DR_admin user and issue sap_upgrade_server to complete the upgrade
for SAP ASE on site2:
50. (Skip this step if you do not lock the sa login) Log in to ASE2 as the user with sapsso permissions and issue
this once the upgrade is complete:
Note
At this point of the upgrade process, the HADR system is working normally, with SRS1, SRS2, ASE1,
and ASE2 at the newer versions. The topology looks like:
<installation_directory>/FaultManager/setup.bin
52. Choose the existing SAP ASE installation directory on the Choose Install Folder screen. Do not choose to configure Fault Manager in the installer.
53. Set the environment variables by sourcing SYBASE.sh (Bourne shell or Bash) or SYBASE.csh (C shell). For example:
source <installation_directory>/SYBASE.csh
<Fault_Manager_install_dir>/FaultManager/sybdbfm_<CID>
These sections describe how to upgrade an SAP ASE version 15.7 disaster recovery (DR) solution to ASE 16.0
HADR solution in a Business Suite or a Custom Application environment.
● These steps require an application downtime, the length of which depends on the size of the databases you
are upgrading.
● If you do not follow the upgrade steps precisely as documented and in the proper sequence, you will lose
data.
The SAPHOSTAGENT<SP-version>.SAR archive contains all of the required elements for centrally monitoring any host. It is available for all operating system platforms supported by SAP.
Context
The SAP Host Agent is automatically installed during the installation of SAP systems or instances with SAP
kernel 7.20 or higher.
Procedure
3. Choose Installation and Upgrades > By Alphabetical Index (A-Z) > H > SAP Host Agent > SAP Host Agent 7.21, then select the highest available version.
4. Select the appropriate SAPHOSTAGENT<SP-version>.SAR archive from the Download tab.
Recommendation
Always select the highest SP version of the SAPHOSTAGENT<SP-version>.SAR archive, even if you
want to monitor a component of SAP NetWeaver with a lower release.
5. Make sure that the SAPCAR tool is available on the host where you want to install SAP Host Agent.
Windows: c:\temp\hostagent
UNIX: /tmp/hostagent
Recommendation
You can use the additional parameter -verify to verify the content of the installation package against the SAP digital signature.
10. After the upgrade has finished successfully, you can check the version of the upgraded host agent by
executing the following command from the directory of the SAP Host Agent executables:
UNIX:
○ If you are logged on as a user with root authorization, the command is: /usr/sap/hostctrl/exe/saphostexec -version
○ If you are logged on as a member of the sapsys group (for example, <sapsid>adm), the command is: /usr/sap/hostctrl/exe/hostexecstart -version
Procedure
stopsap r3
4. Check the ticket table on the standby site. Log in with isql and issue:
use master
go
select count(*) from rs_ticket_history
go
––––––––––
3
Verify this is the same ticket history in the replicated database (in this example, TIA):
use TIA
go
select count(*) from rs_ticket_history
go
––––––––-
3
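When scripting this verification, the count can be extracted from captured isql output and compared across the two databases. A sketch using the counts shown above:

```shell
#!/bin/sh
# Sketch: pull the count(*) value out of captured isql output and compare
# the primary (master) count with the replicated database (TIA above).
extract_count() {
    # The last line consisting only of a number is the count(*) result row.
    awk '/^[[:space:]]*[0-9]+[[:space:]]*$/ { n = $1 } END { print n }'
}

master_out=' ----------
          3'
tia_out=' ----------
          3'
p=$(printf '%s\n' "$master_out" | extract_count)
r=$(printf '%s\n' "$tia_out" | extract_count)
if [ "$p" = "$r" ]; then
    echo "ticket history in sync ($p)"
else
    echo "MISMATCH: primary=$p replicated=$r"
fi
```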
5. Log in to the RMA, and check the SAP ASE transaction backlog and the Replication Server queue backlog
using sap_send_trace RMA command (this example runs the command against the PRI database):
sap_send_trace PRI
TASKNAME TYPE VALUE
---------------------- ---------------------------- -----------------------------------
Send Trace Start Time Mon Nov 16 04:42:02 EST 2015
Send Trace Elapsed Time 00:00:00
Send Trace Task Name Send Trace
Send Trace Task State Completed
Send Trace Short Description Send a trace through the Replication system using rs_ticket
Send Trace Long Description Successfully sent traces on participating databases.
Send Trace Host Name Big_Host
(7 rows affected)
1> sap_status resource
2> go
Name Type Value
---------------------- ---------------------------- -----------------------------------
Start Time 2016-08-24 15:37:55.327
Elapsed Time 00:00:00
Estimated Failover Time 0
PRI Replication device size (MB) 15360
PRI Replication device usage 112
COM Replication device size (MB) 15360
COM Replication device usage 128
PRI.master ASE transaction log (MB) 300
PRI.master ASE transaction log backlog (MB) 0
PRI.master Replication queue backlog (MB) 0
COM.master Replication queue backlog (MB) 0
PRI.TIA ASE transaction log (MB) 10240
PRI.TIA ASE transaction log backlog (MB) 0
PRI.TIA Replication queue backlog (MB) 0
COM.TIA Replication queue backlog (MB) 0
(15 rows affected)
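A script can scan this output for nonzero queue backlogs before proceeding. A sketch, assuming the line format of the sample above (which may vary by version):

```shell
#!/bin/sh
# Sketch: scan `sap_status resource` output for nonzero replication queue
# backlogs; a sustained nonzero value means the standby is not caught up.
check_backlog() {
    awk '/Replication queue backlog/ {
        if ($NF + 0 > 0) { print "WARN: " $0; bad = 1 }
    } END { if (!bad) print "no replication queue backlog" }'
}

sample='PRI.master Replication queue backlog (MB) 0
COM.master Replication queue backlog (MB) 0
PRI.TIA Replication queue backlog (MB) 0
COM.TIA Replication queue backlog (MB) 0'
result=$(printf '%s\n' "$sample" | check_backlog)
echo "$result"
```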
use master
go
select count(*) from rs_ticket_history
go
––––––––––
4
Verify this is the same ticket history in the replicated database (in this example, TIA):
use TIA
go
select count(*) from rs_ticket_history
go
––––––––-
4
7. Uninstall the primary or the standby Replication Server (RMA internally tears down the entire HADR
system, including the primary and standby Replication Servers and the users and roles, once you issue
sap_teardown). Log into either RMA and issue:
sap_teardown
8. Upgrade SAP ASE to version 16.0. You can upgrade the primary and the standby servers at the same time.
Issue this from the command line:
For example:
9. Install the Data Movement option on the primary and standby servers using the silent install method.
a. Prepare a response file according to the instructions in Installing the HADR System with Response
Files, Console, and Silent Mode [page 66], indicating you are installing only the "SAP ASE Data
Movement for HADR."
b. Log on to the host as user syb<sid>.
c. Execute the response file according to these instructions Installing the HADR System in Silent Mode
[page 78].
10. Unlock the sa user on the primary and standby SAP ASE servers. Perform these steps on both
companions:
a. Log in to the primary SAP ASE database as user sapsso.
b. Issue:
11. Unlock user sa on the primary and standby SAP ASE servers. Perform these steps on both companions:
a. Log in to the primary SAP ASE database as user sapsso.
12. Configure SAP ASE for the HADR environment. Follow the instructions in Installing HADR with an Existing
System [page 85] for configuring the primary and standby servers.
13. Lock user sa on the primary and standby SAP ASE servers. Perform these steps on both companions:
a. Log in to the primary SAP ASE database as user sapsso.
b. Issue:
14. Use the sap_status command to check the Replication Server status after the upgrade. Issue this at the
RMA isql prompt:
sap_status path
You should see this line for the Replication Server Status in the output:
For example:
sap_tune_rs Big_Host, 8, 2
sap_tune_rs Other_Big_Host, 8, 2
startsap r3
An HADR system allows you to maintain confidentiality of data by encrypting client-server communications
using Secure Sockets Layer (SSL) session-based security.
Note
SSL is the standard for securing the transmission of sensitive information over the Internet, including credit
card numbers, stock trades, and banking transactions.
SSL uses certificates issued by certificate authorities (CAs) to establish and verify identities. A certificate is like
an electronic passport; it contains all the information necessary to identify an entity, including the public key of
the certified entity and the signature of the issuing CA.
You can enable SSL security for the following HADR scenarios:
● Inside an HADR system. See Enabling SSL for the HADR System [page 223].
● For an external replication system. See Configuring SSL for External Replication [page 224].
● For the Fault Manager. See Configuring the Fault Manager in an SSL-Enabled HADR Environment [page
226].
Implementing HADR SSL features requires a knowledgeable system security officer familiar with the security
policies and needs of your site, and who has a general understanding of SSL and public-key cryptography.
To enable SSL functionality when you set up a new HADR system, use the setuphadr utility.
Configure the following parameters in the setup_hadr.rs file to enable SSL functionality. Then, run the
setuphadr utility to configure the interface files, and enable SSL automatically.
# SSL common name - This is the name of your SAP ASE server. It must be the common name you use to generate your server certificates.
ssl_common_name=YOUR_HADR_SERVERNAME
# Name and location of the Root CA certificate. If you are using a self-signed certificate, put your public key file here.
ssl_ca_cert_file=/tmp/rootCA.pem
# SSL password to protect your private key - this is the same password you used while creating your certificates and private keys.
ssl_password=password
Note
The setuphadr utility uses the following default values for the backup server credential if you don't specify
your own values in the response file:
bs_admin_user=sa
bs_admin_password=
For details on key generation, see Enabling SSL in the SAP ASE Security Administration Guide.
The process required to enable SSL for an external replication system differs from the process required to
enable SSL for an HADR system. In an external replication system, manual configurations are required to
enable SSL.
You can enable SSL security to replicate data from an external SAP Replication Server to an HADR system.
Prerequisites
SSL is enabled in your HADR system. See Enabling SSL for the HADR System [page 223].
Procedure
1. Ensure that $SYBASE/config/trusted.txt contains the CA certificates and public keys needed to
access the SSL-enabled HADR system.
2. In the interfaces file, add an SSL entry to the external Replication Server.
Example:
SAMPLE_RS
master tcp ether localhost 11752 ssl="CN=SAMPLE_RS.sap.com"
query tcp ether localhost 11752 ssl="CN=SAMPLE_RS.sap.com"
You can enable SSL security to replicate data from an HADR system to an external SAP Replication Server.
Prerequisites
SSL is enabled in your HADR system. See Enabling SSL for the HADR System [page 223].
1. Set up and enable SSL services on the SAP Replication Server. See Setting Up SSL Security on Replication
Server and Enable SSL Security on Replication Server in the SAP Replication Server Administration Guide:
Volume 1.
2. Establish a connection between the HADR system and the external SAP Replication Server:
Configuring the Fault Manager in an SSL-Enabled HADR environment requires tasks that include creating and
configuring personal security environments for the client and server.
Note
The following table lists the shared library recommendations to extend SSL support in the SAP Host Agent and
the heartbeat client. Shared libraries for all platforms are provided in the Fault Manager installer.
Windows:
● sapcrypto.dll
● slcryptokernel.dll
● slcryptokernel.dll.sha256
Linux/UNIX:
● libslcryptokernel.so
● libsapcrypto.so
● libslcryptokernel.so.sha256
Perform this task to create a personal security environment (PSE), SAPSSLS.pse, for the SAP Host Agent on
both database hosts.
Procedure
○ (Linux)
○ (Windows)
2. (Linux) Assign ownership of the sec directory to user sapadm and group sapsys:
3. Use the following commands to set up the shared library search path (LD_LIBRARY_PATH, LIBPATH, or
SHLIB_PATH) and SECUDIR environment variables, and change to the exe directory of SAP Host Agent:
○ (Linux)
export LD_LIBRARY_PATH=/usr/sap/hostctrl/exe/
export SECUDIR=/usr/sap/hostctrl/exe/sec
cd /usr/sap/hostctrl/exe
○ (Windows) Use the following command to set the SECUDIR environment variable:
Set up SECUDIR as an absolute path to avoid trouble with the sapgenpse tool.
4. Logged in as the sapadm user, execute the following using a fully qualified domain name as your host name (such as myhost.wdf.sap.corp). This command creates a server PSE file named SAPSSLS.pse, which authenticates myhost.wdf.sap.corp for incoming SSL connections. Access to this file requires a password. Include the -r option to direct the certificate signing request (CSR) to a file, or omit it if you plan to copy and paste the CSR into a web form:
○ (Linux)
○ (Windows)
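As a sketch of the Linux form of this step: the get_pse subcommand and the -p, -r, and -x options are standard sapgenpse usage, while the distinguished-name attributes are placeholders you must replace with your organization's values. The command is composed and printed for review rather than executed:

```shell
#!/bin/sh
# Sketch: compose (and review before running) the sapgenpse call that
# creates SAPSSLS.pse and writes the CSR to a file. The DN attributes are
# placeholders; run the final command as the sapadm user.
FQDN=myhost.wdf.sap.corp
cmd="/usr/sap/hostctrl/exe/sapgenpse get_pse -p SAPSSLS.pse -r SAPSSLS.req -x <password> \"CN=$FQDN, OU=<org_unit>, O=<org>, C=<country>\""
echo "$cmd"
```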
5. Grant the server PSE access to the sapadm operating system user:
○ (Linux)
○ (Windows)
Note
When the PKCS#7 format is used for signing the certificate, the default name of the certificate file is
myhost.p7b.
○ (Windows)
%PROGRAMFILES%\SAP\hostctrl\exe\sapgenpse.exe import_own_cert -p SAPSSLS.pse -x <password> -c myhost.p7b
○ (Linux)
○ (Windows)
○ (Linux)
○ (Windows)
%PROGRAMFILES%\SAP\hostctrl\exe\sapgenpse.exe export_own_cert -p SAPSSLS.pse -x <password> -r -f x509 -o <serverCA>.cer
Next Steps
Note
For detailed commands and instructions to create a PSE for the SAP Host Agent on UNIX, Windows and
IBMi environments, see SSL Configuration for the SAP Host Agent under SAP NetWeaver AS for ABAP
innovation package.
Use the SAP GUI installer (run the installer without the response file) to install the Fault Manager, and enter the
required parameters manually.
Perform this task to create a client personal security environment (PSE), SAPSSLC.pse, on the Fault Manager host for the SAP Host Agent.
Procedure
LD_LIBRARY_PATH=$SYBASE/Faultmanager/lib:$LD_LIBRARY_PATH
SECUDIR=$SYBASE/FaultManager/sec
cd $SYBASE/FaultManager/
Where $SYBASE is the release area of the Fault Manager, as generated by the installer.
3. Create the client PSE, SAPSSLC.pse, and the certificate signing request (CSR):
○ (Linux)
○ (Windows)
Note
Use the fully qualified domain name (FQDN) as your host name. For example:
myhost.wdf.sap.corp.
4. Configure the user with permission to start and access the Fault Manager without a password:
○ (Linux)
○ (Windows)
When the PKCS#7 format is used for signing the certificate, the default name of the certificate file
is myhost.p7b.
○ (Linux)
○ (Windows)
○ (Linux)
○ (Windows)
8. Import the certificate you exported after you created the server PSE. See task Create a Server PSE for the
SAP Host Agent on Database Hosts [page 227] for more information.
○ (Linux)
○ (Windows)
Perform this task to create a server personal security environment (PSE), SAPSSLS.pse, on the Fault Manager
host for the Heartbeat client.
Procedure
1. Set up the shared library search path (LD_LIBRARY_PATH, LIBPATH or SHLIB_PATH) and SECUDIR
environment variables, and change to the exe directory of SAP Host Agent, as you did when you created the server PSE for the SAP Host Agent on the database host.
2. Create the server PSE, SAPSSLS.pse, and the certificate signing request (CSR). Run the following
command as the FM OS user so that the created files are owned by this user:
○ (Linux)
○ (Windows)
Note
Use the fully qualified domain name (FQDN) as your host name. For example:
myhost.wdf.sap.corp.
○ (Linux)
○ (Windows)
Note
When the PKCS#7 format is used for signing the certificate, the default name of the certificate file
is myhost.p7b.
○ (Linux)
○ (Windows)
○ (Linux)
○ (Windows)
○ (Linux)
○ (Windows)
Perform this task to create a client personal security environment (PSE), SAPSSLC.pse, for the heartbeat
client on database hosts, by using the sapgenpse binary located on the Fault Manager host.
Context
● Execute all commands in this task as the root user (for Windows, this is the user that has Administrator
privileges).
● The sapgenpse binary is located in the $SYBASE/FaultManager/bin/sapgenpse directory on the Fault
Manager host. Do not use the binary located in /usr/sap/hostctrl/exe/.
1. Create a directory to save the client PSE in. Perform one of the following:
○ To use the default path, issue:
mkdir /usr/sap/hostctrl/exe/sec/<CID>/hb/sec
○ To save the Heartbeat client PSE in a path other than the default, specify the path in the <hb_secudir> parameter (ha/syb/hb_secudir = <hb_secudir>) in the Fault Manager profile file, then execute:
mkdir <hb_secudir>
export LD_LIBRARY_PATH=/usr/sap/hostctrl/exe/
export SECUDIR=/usr/sap/hostctrl/exe/<CID>/hb/sec
Note
Set up SECUDIR as an absolute path in order to avoid trouble with the sapgenpse tool.
3. Create the client PSE, SAPSSLC.pse, and the certificate signing request (CSR). Run the following
command as the sapadm user so that the created files are owned by this user:
○ (Linux)
○ (Windows)
Note
Use the fully qualified domain name (FQDN) as your host name. For example:
myhost.wdf.sap.corp.
4. Configure the user with permission to start and access the Fault Manager without a password (the default
user is root):
○ (Linux)
○ (Windows)
Note
When the PKCS#7 format is used for signing the certificate, the default name of the certificate file
is myhost.p7b.
○ (Linux)
○ (Windows)
%PROGRAMFILES%\SAP\hostctrl\exe\sapgenpse.exe import_own_cert -p SAPSSLC.pse -x <password> -c myhost.p7b
○ (Linux)
○ (Windows)
8. Import both certificates that you generated when you created the server PSE on the Fault Manager host for
the heartbeat client:
○ (Linux)
○ (Windows)
/usr/sap/hostctrl/exe/saphostexec -restart
Procedure
ha/syb/db_ssl=0
// Enable/disable SSL at the database level (1 to enable, 0 to disable; in this sample it is set to 0, disabled)
ha/syb/db_ssl_certificate
// Path to the trusted.txt file
ha/syb/primary_ssl_dbport
// SSL-enabled Database port for primary
ha/syb/standby_ssl_dbport
// SSL-enabled Database port for standby
ha/syb/ssl=1
// SAP Host Agent is SSL-enabled. When set to 0, it is not SSL-enabled.
ha/syb/hb_ssl=1
// When set to 1, Heartbeat communication is SSL-enabled; when set to 0, it
is not SSL-enabled.
ha/syb/ssl_anon=1
// SAP Host Agent is SSL-enabled, but client certificate not sent/not
verified (anonymous)
ha/syb/hb_ssl_anon=1
// When set to 1, Heartbeat communication is SSL-enabled, but the client
certificate not sent/not verified (anonymous)
DIR_LIBRARY = <work dir>
// Location of SAP crypto libraries on the Fault Manager host
DIR_INSTANCE = <work dir>
// Location of the ‘sec’ directory on the Fault Manager host
ha/syb/secudir = <secudir>
// Path to the Server PSE on the Fault Manager host
2. (Optional) Set the custom path to the heartbeat client on the database hosts with the following parameter:
ha/syb/hb_secudir = <hb_secudir>
// Custom path to the Heartbeat client PSE on the database hosts
Prerequisites
If a database in the primary companion is encrypted but the corresponding database in the standby companion is not, data is replicated but failover does not occur. If the primary database is encrypted, the database on the secondary companion must be encrypted with the same keys.
Note
For Business Suite, you have to set up encryption before setting up HADR as the setup of standby with
Software Provisioning Manager (SWPM) will copy the same encryption keys to the standby host.
To encrypt a database:
Procedure
2. Configure the primary and secondary companions for the number of worker processes:
d. Query sysencryptkeys for the master key (you will need the values of the eksalt, value, and status
columns, eksalt 01000e7662f4d97ac74d01 below, for the secondary companion):
e. Query sysencryptkeys for the database encryption key (you will need this information for the
secondary companion, 0100908f61ef71ebe29c01 below):
b. Create the encryption key for database encryption using the eksalt, value, and status values from the
primary companion, 0100908f61ef71ebe29c01 below:
create encryption key dek_db1 for AES for database encryption with
keylength 256 passwd 0x0100908f61ef71ebe29c01 init_vector
random keyvalue
0x3ab16e1b0684d6b7b7b3916cb4fb839ee241ec30cfdd686a637af3934c4c8430b80bb6263
c9c4ad5e9d2b148fc50e37c01 keystatus
2049
c. Use the dbencryption_status function to check the status of the encryption (database encryption
is a background process):
dbencryption_status('status',db_id('<database_ID>'))
6. Update the SAP ASE runserver file or Windows Service with the --master-key-passwd=<password>
parameter for the database encryption key, or configure the primary and secondary companions for
encryption with automatic_startup.
Note
2. Alter the master key to use encryption when SAP ASE starts:
alter encryption key master with passwd <password> add encryption for
automatic_startup
Note
The database encryption key is named sapdbkey by default. See SAP Note 2224138: https://
launchpad.support.sap.com/#/notes/0002224138 .
You can replicate data, including stored procedures and SQL statements, from an existing HADR system to an
external system, or into an HADR system from an external system.
There are a number of requirements and restrictions when replicating data from an HADR system to an
external system and vice versa.
● The HADR system and the external SAP Replication Server must use the same platform.
HADR systems with external replication can replicate data from the external replication system to the HADR
system, and vice versa.
An external replication system includes all components in a replication system except the current HADR
system with which you are working. The system can be an SAP Replication Server that contacts either an
HADR system or an SAP ASE server.
The HADR system contains a primary companion and a secondary companion, and each companion includes
an SAP ASE and an SAP Replication Server.
In this architecture, the external SAP Replication Server replicates data to the primary SAP ASE server in the
HADR system. The primary server is available to the external system for replication even if all other servers in
the HADR system are down, because the external system replicates into the primary server without going
through the SAP Replication Server inside the HADR system.
Even though the primary SAP ASE server is exposed to the external replication system, the details of the HADR
system, such as routes and subscriptions are all hidden from the external system. The failover steps within the
HADR system do not change when the external replication system replicates data into this HADR system.
The Data Server Interface (DSI) from the external Replication Server is perceived as a regular user in the
primary companion, and is redirected to the current active companion during failover.
Replicating Data out from the HADR System into Replication Server
In this architecture, the active SAP Replication Server (the server on the secondary companion) replicates data
to the external replication system by using the embedded SPQ Agent.
Even though the active Replication Server is exposed to the external replication system for configuring the SPQ
Agent, the details of the HADR system, such as routes and subscriptions, are all hidden from the external
system. After a failover, drain the previously active SPQ Agent to the external SAP Replication Server before
starting a new active SPQ Agent.
The steps for configuring HADR with an external system differ depending on whether you are replicating data
from the external system into the HADR system or from the HADR system to an external system.
● The data server interface (DSI) from the external Replication Server to the primary companion in HADR
must enable dsi_replication_ddl. If it is disabled (set to off), the DSI applies DDLs using its original
user, and if the original user has sa or allow HADR login permission, the connection may not be
redirected to the primary companion. With dsi_replication_ddl enabled, the maintenance user
executes the DDL and login redirection ensures that the DDL is applied to the primary companion.
The maintenance user from the external Replication Server must have set proxy authorization
permissions to replicate DDL with dsi_replication_ddl since the DSI connection uses set user to
perform DDL operations.
● If your site requires bidirectional replication for external replication, you need to disable
dsi_replication for the data server interface (DSI) from the external SAP Replication Server to the
primary companion in HADR. With dsi_replication disabled, the DSI issues set replication off
when applying DDLs as the maintenance user while connected to the primary companion; DDLs issued
by the maintenance user are therefore filtered out and do not return to the site where they were originally
executed.
Note
Using RMA to set up the HADR system disables dsi_replication in the HADR system automatically.
● The maintenance user for DSI connections to HADR cannot use the same name as the maintenance user
in the HADR system (for example, DR_maint, which is the default HADR maintenance user name).
● The maintenance user for DSI connections into HADR cannot be named DR_admin. HADR uses this name
for its DR administrator.
● After a failover, the SPQ Agent associated with the newly active Replication Server must wait for SPQ Agent
from the previously active Replication Server to drain its data before it can start replicating. If a second
failover occurs before the drain is completed during this wait period, external replication may suffer data
loss and must be rematerialized, or resolved using Data Assurance (or manual methods) in situations
where rematerialization is not possible.
● Transactions made by DR_admin are not replicated to the external system. The external system cannot
use the DR_admin user to create, drop, or alter subscriptions, and any marker issued by DR_admin is
replicated only within the HADR system.
● The version of the external SAP Replication Server must be the same as or later than the version of the two
SAP Replication Server servers and the two SAP ASE servers within the HADR system. Otherwise,
replication out of HADR fails.
● If you set up HADR with a login other than the default HADR administrator user (DR_admin), use the
configure replication server or alter connection command with the
cap_filter_dr_admin_name parameter to change the default administrator user name to your own
login name. This ensures that the external Replication Server can filter out all commands issued under
your login name. Otherwise, data inconsistency may happen, causing DSI to go down.
● When replicating data into an HADR system, dropping the connection to the HADR system does not delete
the system tables created by the external maintenance user in the primary companion automatically.
You can use Replication Server to create a connection to the external database as a primary source.
Prerequisites
Replication Server is installed, and is managing the external database. See the Replication Server installation
guide for your platform if you have not yet installed Replication Server.
1. Add an entry to the Replication Server interfaces file for the primary and companion SAP ASE servers in the
HADR system. This example includes entries for the primary and companion servers:
SFSAP1
query tcp ether SFMACHINE1 5000
query tcp ether SJMACHINE2 5000
2. Create a maintenance user in the primary SAP ASE server in the HADR system. The maintenance user for
DSI connections to HADR cannot use the same name as the maintenance user in the HADR system, or the
name of the default user who executes RMA commands (DR_maint and DR_admin).
a. Create the login. This example creates the pubs2_maint login:
use pubs2
go
sp_adduser pubs2_maint
go
c. To ensure this login is always redirected to the active companion, it cannot have the sa_role,
sso_role, or the allow HADR login and HADR admin role permissions. Use sp_displayroles
to display the role information. For example:
sp_displayroles pubs2_maint
Role Name
------------------------------
sa_role
replication_role
replication_maint_role_gp
sap_maint_user_role
Use the revoke command to remove roles and permissions. For example:
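The following is a sketch of the kind of revoke command you might issue; adjust the role names to match those reported by sp_displayroles for your maintenance user:

use master
go
revoke role sa_role from pubs2_maint
go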
d. Create a role for external replication. In this example, the role is named
external_replication_role:
e. Grant set proxy permissions to the external_replication_role, but restrict it from switching
to the sa_role, sso_role, and mon_role roles:
i. Since the maintenance user is not aliased as the database owner (dbo), manually grant it the following
permissions on the tables:
Note
You need the SECDIRS license to grant the following permissions to the maintenance user.
use pubs2
grant delete any table to replication_role
go
grant create any table to replication_role
go
grant create any procedure to replication_role
go
grant execute any function to replication_role
go
grant execute any procedure to replication_role
go
grant identity_insert any table to replication_role
go
grant identity_update any table to replication_role
go
grant insert any table to replication_role
go
grant select any system catalog to replication_role
go
grant select any table to replication_role
go
grant truncate any table to replication_role
go
grant update any table to replication_role
go
3. Issue this command from the Replication Server that replicates into the HADR system:
For example:
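A sketch of a typical create connection command, assuming the HADR primary server entry is SFSAP1, the database is pubs2, and the maintenance user is pubs2_maint (all names are placeholders):

create connection to SFSAP1.pubs2
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username pubs2_maint
set password <password>
go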
4. Perform the following to replicate data from the external database to the primary companion's database:
○ Create table- or database-level replication definitions (repdefs) using the primary as the external
database. This example creates the pubs_rep replication definition for the pubs2.publishers table:
○ Create subscriptions for the replication definition with replicates at the HADR database. This example
creates the pubs_sub subscription:
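Sketches of the replication definition and subscription commands, assuming the external primary data server entry is EXDS and the HADR primary server entry is SFSAP1 (server names and column definitions are placeholders; adjust them to your tables):

create replication definition pubs_rep
with primary at EXDS.pubs2
with all tables named 'publishers'
(pub_id char(4), pub_name varchar(40), city varchar(20), state char(2))
primary key (pub_id)
go

create subscription pubs_sub
for pubs_rep
with replicate at SFSAP1.pubs2
go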
You can also replicate stored procedures and SQL statements into the HADR system. The procedure of
replicating them in stream replication is the same as the procedure for Log Transfer Language (LTL)
replication. See Replication Server Administration Guide Volume 1 > Manage Replicated Functions > Use
Replicated Functions and Replication Server Administration Guide Volume 2 > Performance Tuning > SQL
Statement Replication for details.
When you replicate a request stored procedure and the select into statement, grant the execute
privilege on the following stored procedures to the original user who executes them in the primary
companion's database:
○ <maintenance_user>.rs_update_last_commit
○ <maintenance_user>.rs_get_lastcommit
○ <maintenance_user>.rs_get_thread_seq
○ <maintenance_user>.rs_initialize_threads
○ <maintenance_user>.rs_syncup_lastcommit
○ <maintenance_user>.rs_ticket_report
○ <maintenance_user>.rs_update_threads
The <maintenance_user> is the maintenance user for DSI connections to HADR, and not the
maintenance user in the HADR system.
Next Steps
To configure the system to replicate bidirectionally, set up replication both into and out of the HADR cluster.
When you create a primary connection from an HADR system, SPQ Agent is enabled and configured on the
active Replication Server. The SPQ Agent on the active Replication Server reads data from its SPQ and sends it
to the external system.
Prerequisites
Replication Server is installed, and is managing the external database. See the Replication Server installation
guide for your platform if you have not yet installed Replication Server.
Procedure
1. Create a connection to the external SAP ASE database as the replicate target.
2. Create a maintenance user in the primary SAP ASE server in the HADR system. The maintenance user
cannot be named DR_maint or DR_admin, which are default HADR maintenance user names.
a. Create the login. This example creates the pubs2_maint login:
b. To ensure this login is always redirected to the active companion, it cannot have the sa_role,
sso_role, or the allow HADR login and HADR admin role permissions. Use sp_displayroles
to display the role information. For example:
sp_displayroles pubs2_maint
Role Name
------------------------------
sa_role
replication_role
replication_maint_role_gp
sap_maint_user_role
Use the revoke command to remove roles and permissions. For example:
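The following is a sketch of the kind of revoke command you might issue; adjust the role names to match those reported by sp_displayroles for your maintenance user:

use master
go
revoke role sa_role from pubs2_maint
go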
c. Create a role for external replication. In this example, the role is named
external_replication_role:
d. Grant set proxy permissions to the external_replication_role, but restrict it from switching
to the sa_role, sso_role, and mon_role roles:
h. Grant permissions to the maintenance user on the tables; this user is aliased as the database owner.
Issue commands similar to the following to grant the correct permissions to the maintenance user on
your system.
Note
You need the SECDIRS license to grant the following permissions to the maintenance user.
3. Create the same maintenance user on the active and standby HADR Replication Server and grant manage
spq_agent privileges to this user:
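A sketch of the commands on each HADR Replication Server, assuming the maintenance user is pubs2_maint (the user name and password are placeholders):

create user pubs2_maint set password <password>
go
grant manage spq_agent to pubs2_maint
go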
4. Create the SPQ Agent user on the external Replication Server, and grant the connect source role to this
user. HADR Replication Server uses this user to connect to the external Replication Server. You must
create an SPQ Agent user for each database that participates in external replication. For example:
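A sketch, assuming a hypothetical SPQ Agent user named spq_user for the pubs2 database:

create user spq_user set password <password>
go
grant connect source to spq_user
go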
5. Add an entry to the external Replication Server interfaces file for the primary and companion SAP ASE
servers in the HADR system. This example includes entries for the primary and companion servers:
SFSAP1
query tcp ether SFMACHINE1 5000
query tcp ether SJMACHINE2 5000
6. Issue this command from the external Replication Server to create a primary connection from HADR
system:
For example:
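A sketch of the primary connection command, assuming the HADR primary server entry is SFSAP1 and the maintenance user is pubs2_maint; the clause names are placeholders for the syntax your Replication Server version supports:

create connection to SFSAP1.pubs2
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username pubs2_maint
set password <password>
with log transfer on, primary only
go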
Note
○ If you specify the with primary only parameter, the data server interface (DSI) is disabled, and
outbound queues and their stable queue management (SQM) and DSI threads are not started.
○ The command configures an SPQ Agent in the active Replication Server in the HADR system that
reads data from the SPQ for the intended database, and forwards the data to the external
Replication Server.
○ When the command completes, the SPQ Agent connects to the external Replication Server.
7. Perform the following to replicate data from the HADR database to the external database:
○ Create table- or database-level replication definitions (repdefs) with the primary at the HADR
database. This example creates the pubs_rep replication definition for the pubs2.publishers table:
○ Create subscriptions for the replication definition with replicates at the replicate database. This
example creates the pubs_sub subscription:
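Sketches of the replication definition and subscription commands, assuming the HADR primary server entry is SFSAP1 and the external replicate data server entry is EXDS (server names and column definitions are placeholders; adjust them to your tables):

create replication definition pubs_rep
with primary at SFSAP1.pubs2
with all tables named 'publishers'
(pub_id char(4), pub_name varchar(40), city varchar(20), state char(2))
primary key (pub_id)
go

create subscription pubs_sub
for pubs_rep
with replicate at EXDS.pubs2
go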
You can also replicate stored procedures and SQL statements from the HADR system to the external
database. The procedure of replicating them in stream replication is the same as the procedure for LTL
replication. See Replication Server Administration Guide Volume 1 > Manage Replicated Functions > Use
Replicated Functions and Replication Server Administration Guide Volume 2 > Performance Tuning > SQL
Statement Replication for details.
Next Steps
Note
To configure the system to replicate bidirectionally, set up replication both into and out of the HADR cluster.
Adding HADR to a primary SAP ASE with replication should not damage any existing replication. Although it
may pause replication for a short time, it should not cause data loss or duplicates in the existing replication
system.
This task assumes a system that includes a Replication Server that is replicating data between two SAP ASE
servers. The data is replicating from the primary to the target server. This task migrates the primary SAP ASE
to an HADR system.
Prerequisites
These steps describe the migration from a dataserver DS_P that currently replicates data to target dataserver
DS_T using Replication Server EX_RS. These steps add high availability capability to dataserver DS_P by
migrating it to an HADR system. Since DS_P already replicates data to dataserver DS_T, it already has a
connection to the Replication Server EX_RS, and assume the maintenance user for this connection is
pubs2_maint, and that it has the replication definitions and subscriptions in Replication Server EX_RS.
1. Shut down all applications connected to the primary SAP ASE (DS_P in this example). Do not shut down
SAP ASE and Replication Server.
2. Drain the primary SAP ASE logs. Issue rs_ticket on the primary SAP ASE (DS_P) and then check it on
the target SAP ASE (DS_T in this example):
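A sketch of the drain check, assuming the pubs2 database; the ticket name is arbitrary. On the primary SAP ASE (DS_P):

use pubs2
go
rs_ticket 'drain_check'
go

Then, on the target SAP ASE (DS_T), check that the ticket has arrived:

use pubs2
go
select * from rs_ticket_history
go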
When the ticket reaches the target SAP ASE (DS_T), the logs are drained. See Checking Latency with
rs_ticket [page 339].
3. Stop the Replication Agent from the primary SAP ASE (DS_P). Log in using isql and issue a command
similar to (this stops the Replication Agent running on the pubs2 database):
use pubs2
go
sp_stop_rep_agent pubs2
go
4. Check the maintenance user roles. It cannot have the sa_role or sso_role roles, or have the allow
HADR login permission.
Use rs_helpuser to check if the maintenance user is aliased to dbo. For example:
rs_helpuser pubs2_maint
Use sp_displayroles to display the role information. For example, this checks the pubs2_maint role:
sp_displayroles pubs2_maint
Role Name
------------------------------
sa_role
replication_role
replication_maint_role_gp
sap_maint_user_role
Use the revoke command to remove roles and permissions. For example:
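The following is a sketch of the kind of revoke command you might issue; adjust the role names to match those reported by sp_displayroles for your maintenance user:

use master
go
revoke role sa_role from pubs2_maint
go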
5. Use the SAP GUI or the setuphadr utility to migrate the primary SAP ASE (DS_P) to an HADR system. See
Installing HADR with an Existing System [page 85].
6. Add an entry to the external Replication Server (EX_RS in this example) interfaces file for the companion
SAP ASE server.
DS_P
query tcp ether SFMACHINE1 5000
query tcp ether SJMACHINE2 5000
7. Create the external maintenance user (pubs2_maint in this example) on the primary and standby
Replication Servers in the HADR system, and grant this user manage spq_agent permissions.
Use the password that was created when the connection between the primary SAP ASE and the external
Replication Server was created.
8. Log in to the external Replication Server, create the SPQ Agent user, and grant this user the connect
source role. The Replication Servers use this user to connect to the external Replication Server.
9. Issue this command on the external Replication Server (EX_RS) to enable replication to the external
database (pubs2 in this example):
Results
When you finish migrating the primary SAP ASE server to an HADR system, data replication — including the
replication of stored procedures and SQL statements, if available before the migration — continues from the
HADR system to the replicate SAP ASE server.
This task assumes a system that includes a Replication Server that is replicating data between two SAP ASE
servers. The data is replicating from the primary to the target server. This task migrates the target SAP ASE to
an HADR system.
Prerequisites
These steps describe the migration from a dataserver DS_P that currently replicates data to target dataserver
DS_T using Replication Server EX_RS. These steps add high availability capability to dataserver DS_T by
migrating it to an HADR system. Since DS_P already replicates data to dataserver DS_T, it already has a
connection to the Replication Server EX_RS, and assume the maintenance user for this connection is
pubs2_maint, and that it has the replication definitions and subscriptions in Replication Server EX_RS.
When the ticket reaches the target SAP ASE (DS_T), the logs are drained. See Checking Latency with
rs_ticket [page 339].
3. Stop the Replication Agent from the primary SAP ASE (DS_P). Log in to the primary SAP ASE using isql
and issue a command similar to (this stops the Replication Agent running on the pubs2 database):
use pubs2
go
sp_stop_rep_agent pubs2
go
4. Suspend the DSI connection to the replicate database (pubs2 in this example):
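A sketch, assuming the replicate connection is DS_T.pubs2:

suspend connection to DS_T.pubs2
go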
5. Log in as user sa, drop the pubs2_maint alias as dbo in the replicate database, and add pubs2_maint
as a user (this example assumes you are using the pubs2 database):
use master
go
grant set proxy to pubs2_maint
go
use pubs2
go
sp_dropalias pubs2_maint
go
sp_adduser pubs2_maint
go
grant all to pubs2_maint
go
6. Load and run the rs_install_replicate.sql file as pubs2_maint to create replication system tables
and procedures in the replicate database, such as pubs2_maint.rs_lastcommit,
pubs2_maint.rs_threads, or rs_update_lastcommit.
7. Log in using isql and issue these commands as pubs2_maint in the replicate database to copy the data
from dbo.rs_lastcommit into pubs2_maint.rs_lastcommit:
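A sketch of the copy, assuming the pubs2 database:

use pubs2
go
insert into pubs2_maint.rs_lastcommit
select * from dbo.rs_lastcommit
go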
8. Use the SAP GUI or the setuphadr utility to migrate the target server to an HADR system. See Installing
HADR with an Existing System [page 85].
DS_T
query tcp ether SFMACHINE1 5000
query tcp ether SJMACHINE2 5000
10. Issue this command on the external Replication Server (EX_RS) to enable replication from the primary SAP
ASE to the HADR system:
For example:
11. Start the Replication Agent on the primary SAP ASE (DS_P).
use pubs2
go
sp_start_rep_agent pubs2
go
12. (Optional) Data replication — including stored procedures and SQL statements, if available before the
migration — continues from the primary SAP ASE server to the HADR system. However, to replicate a
request stored procedure and the select into statement from the primary SAP ASE server to the HADR
system, grant the execute privilege on these stored procedures to the original user who executes them in
the primary companion's database:
○ <maintenance_user>.rs_update_last_commit
○ <maintenance_user>.rs_get_lastcommit
○ <maintenance_user>.rs_get_thread_seq
○ <maintenance_user>.rs_initialize_threads
○ <maintenance_user>.rs_syncup_lastcommit
○ <maintenance_user>.rs_ticket_report
○ <maintenance_user>.rs_update_threads
The <maintenance_user> is the maintenance user for DSI connections to HADR, and not the
maintenance user in the HADR system. For example, <maintenance_user> used in this task would be
pubs2_maint.
Removing external replication includes dropping all subscriptions, replication definitions (repdefs), and
connections from the external Replication Server.
Dropping the connections also disables the SPQ Agent at the active HADR Replication Server in systems that
replicate data out from an HADR system.
You must drop the subscriptions and replication definitions before you drop the connections. The SPQ Agent
on the active HADR Replication Server is shut down when you drop the connection for replicating data out.
1. Drop the subscriptions and replication definitions for the external database. For example, this removes the
subscriptions and replication definitions for the pubs2 database in the external data server, LASAP1:
This drops a replication definition named pubs2_def and any function strings that exist for it:
2. Drop the connections. This example drops the connections to the pubs2 database in the HADR system:
This task tears down an HADR system which replicates data to an external system, but maintains the
replication.
Context
In an HADR system that replicates data out to an external replication system, when the secondary companion is
down, you can migrate the HADR system to a standalone SAP ASE and connect the external system with the
Replication Agent thread on the standalone SAP ASE.
sap_teardown
2. Remove companion SAP ASE host and port information from the interface file of the external Replication
Server.
3. Run the rs_install_primary.sql script manually (located in $SYBASE/DM/REP_16_0/scripts) as
DR_admin to install the rs_ticket command.
4. Log into primary SAP ASE and reconfigure the Replication Agent:
5. Drain the data from the primary SAP ASE. Issue rs_ticket on the primary SAP ASE and then check it on
the target SAP ASE:
When the ticket reaches the target SAP ASE, the data is drained.
6. Alter the connection from HADR to the external Replication Server:
7. Shut down the secondary companion in the HADR system. Log in with isql and issue:
shutdown
8. Shut down the inactive Replication Server (running on the original primary site) in the HADR system. Log in
with isql and issue:
shutdown
9. Use sap_teardown to ensure that Replication Server is shut down. If Replication Server is not shut down,
use the kill command to terminate the process.
10. Suspend and resume log transfer:
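A sketch, using placeholders for the data server and database names:

suspend log transfer from <primary_dataserver>.<dbname>
go
resume log transfer from <primary_dataserver>.<dbname>
go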
This task tears down an HADR system into which an external system replicates data, but maintains the
replication.
Procedure
1. Log into the RMA you are tearing down and execute:
sap_teardown
2. Remove companion SAP ASE host and port information from the interface file.
3. Alter the connection to no longer connect to the HADR system:
4. Restart the DSI connection to the original HADR system. For example:
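A sketch, using placeholders for the data server and database names:

resume connection to <primary_dataserver>.<dbname>
go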
5. Shut down the secondary companion in the HADR system. Log in with isql and issue:
shutdown
6. Use sap_teardown to ensure that Replication Server is shut down. If Replication Server is not shut down,
use the kill command to terminate the process.
Manage an external replication system by disabling or enabling replication, performing failover, monitoring the
system, and so on.
Disabling replication to an external Replication Server requires that RMA stops its SPQ Agent and disables the
secondary truncation point.
Issue this command to disable replication from a specific database to the external Replication Server:
sap_disable_external_replication <dbname>
This command stops the SPQ Agent in the active Replication Server and disables its secondary truncation
point in the SPQ so that the queue can be truncated properly. However, this command may result in data loss.
To enable replication from a specific database to the external Replication Server, enter this at the command
line:
sap_enable_external_replication <dbname>
You must manually rematerialize the databases after enabling external replication. See Rematerializing
Databases for External Replication [page 257].
Note
This topic covers only disabling and enabling replication paths to the external Replication Server. For
disabling and enabling replication paths inside an HADR system, see Suspending, Resuming, Enabling,
and Disabling Databases [page 309].
Under certain circumstances, databases may lose synchronization in the external replication system. You need
to rematerialize the databases to resynchronize data.
This section describes how to rematerialize databases for the following scenarios:
To rematerialize databases inside an HADR system, see Materializing and Rematerializing Databases [page
301].
When replicating data from an HADR system to the external replication system, perform the rematerialization
steps manually to rematerialize databases in the external replication system.
Prerequisites
If you are rematerializing the databases from an HADR system to another HADR system, you need to shut
down all applications before rematerializing the databases. Do not start the applications until the
rematerialization is complete.
Context
You can use this method to rematerialize data only from a single database in the HADR system to a single
database in the external replication system.
When HADR databases are replicating data to an external system, disabling the replication path from the
primary SAP ASE server to the active SAP Replication Server disrupts the process, and also requires you to
rematerialize the external databases.
Procedure
1. Log in to RMA on the active Replication Server in the HADR system and disable the replication from the
databases to the external replication environment. To disable the replication for one specific database,
specify the <database_name> in the command; to disable the replication for all databases, execute the
command without the <database_name> variable:
sap_disable_external_replication [<database_name>]
go
2. Log in to RMA on the active Replication Server in the HADR system and enable the replication from the
databases to the external replication environment. To enable the replication for one specific database,
specify the <database_name> in the command; to enable the replication for all databases, execute the
command without the <database_name> variable:
sap_enable_external_replication [<database_name>]
go
3. (Optional) Log in to RMA on the active Replication Server in the HADR system and check if the paths inside
the HADR system are still activated:
sap_status path
go
sap_disable_replication <primary_host_logical_name>,
<companion_host_logical_name>, <database_name>
go
b. Log in to the external Replication Server and suspend log transfer from the HADR system.
c. Log in to the external Replication Server and hibernate on the external Replication Server:
sysadmin hibernate_on
go
Tip
You can choose to run the sysadmin sqm_purge_queue command to purge queues, without
necessarily hibernating on the Replication Server. Instead, you can suspend the appropriate
modules in the Replication Server, and then purge queues as usual. Running sysadmin
sqm_purge_queue with the [, check_only] parameter facilitates this scenario, as it checks
and reports if the appropriate modules were suspended successfully (it does not purge queues),
thus enabling you to make an informed decision before purging queues. Note that you can
continue to purge queues like you did before – by hibernating on the Replication Server. For more
information, see the Usage section under SAP Replication Server Reference Manual > SAP
Replication Server Commands > sysadmin sqm_purge_queue.
Note
Use sysadmin sqm_purge_queue to purge both the inbound queue and the outbound queue. To
purge the inbound queue, set <q_type> to 1. To purge the outbound queue, set <q_type> to 0.
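A sketch of purging both queues, using a placeholder for the queue number:

sysadmin sqm_purge_queue, <q_number>, 1
go
sysadmin sqm_purge_queue, <q_number>, 0
go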
e. Log in to the external Replication Server and hibernate off the external Replication Server:
sysadmin hibernate_off
go
f. Resume the connection from the HADR system to the external Replication Server and from the
external Replication Server to the external database:
g. Without purging, drop the existing subscriptions in the external Replication Server:
h. Create a temporary user (<temp_remater_maint_user>) on the primary SAP ASE server in the
HADR system, and grant it replication role and all permissions. This automatically creates the user on
the companion SAP ASE server. The temporary user is used to define the subscription on the external
Replication Server.
use master
go
create login <temp_remater_maint_user> with password <password>
go
grant role replication_role to <temp_remater_maint_user>
go
use <dbname>
go
sp_adduser <temp_remater_maint_user>
go
grant all to <temp_remater_maint_user>
go
i. Create the temporary user (<temp_remater_maint_user>) on the external SAP Replication Server
and grant it sa role.
k. Log in to the primary server in the HADR system as an sa user and dump the database.
l. Log in to the external Replication Server to check that the subscription is valid:
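A sketch of the check, assuming the pubs_sub subscription and placeholders for the external replicate data server and database:

check subscription pubs_sub
for pubs_rep
with replicate at <ds>.<db>
go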
use master
go
kill <spid>
go
n. Load the database to the active data server in the external replication system with the sa user:
use <database_name>
go
truncate table dbo.rs_lastcommit
go
truncate table dbo.rs_threads
go
truncate table dbo.rs_ticket_history
go
truncate table dbo.rs_mat_status
go
truncate table dbo.rs_dbversion
go
q. (Optional) If you used two different external maintenance users to create the connections from the HADR
system to the external Replication Server and from the external Replication Server to the external
database, then on the external database, drop the external maintenance user for the HADR system, add
the external maintenance user for the external database, and grant all permissions to it.
use <database_name>
go
sp_dropuser <ext_maint_user_a>
go
sp_adduser <ext_maint_user_b>
go
grant all to <ext_maint_user_b>
go
r. (Optional) When the external database is an HADR system, create the system tables and procedures
for the external maintenance user. Log in to the primary server of the HADR system in the external
environment and load the rs_install_replicate.sql file:
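For example, using isql (the login, password, and file path are placeholders):
isql -Usa -P<password> -S<primary_server_name> -i <path_to>/rs_install_replicate.sql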
s. The connection between the external SAP Replication Server and the external database becomes
suspended after you dump the database. Resume the connection:
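For example, using RCL on the external Replication Server (the connection name is a placeholder):
resume connection to <dataserver>.<database>
go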
When replicating from an SAP ASE server to an HADR system, perform the rematerialization steps manually to
rematerialize all databases in the SAP ASE server.
Prerequisites
Shut down all applications before rematerializing the databases. Do not start the applications until the
rematerialization is complete.
Procedure
1. Log in to RMA on the active Replication Server in the HADR system and disable the replication from the
databases to the HADR system. To disable the replication for one specific database, specify the
<database_name> in the command; to disable the replication for all databases, execute the command
without the <database_name> variable:
sap_disable_replication [<database_name>]
go
2. Log in to RMA on the active Replication Server in the HADR system and enable the replication from the
databases to the HADR system. To enable the replication for one specific database, specify the
<database_name> in the command; to enable the replication for all databases, execute the command
without the <database_name> variable:
sap_enable_replication [<database_name>]
go
3. Manually rematerialize the replication path from a specific database to the HADR system. If you want to
rematerialize all databases in the SAP ASE server, repeat the configurations under this step to
rematerialize each database one by one.
a. Log in to the RMA on the primary server of the HADR system and disable the replication path from the
primary server to the companion server:
sap_disable_replication <primary_host_logical_name>,
<companion_host_logical_name>, <database_name>
go
b. Log in to the external Replication Server and suspend log transfer from the database.
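For example, using RCL (the connection name is a placeholder):
suspend log transfer from <dataserver>.<database>
go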
c. Log in to the external Replication Server and hibernate on the external Replication Server:
sysadmin hibernate_on
go
Tip
You can choose to run the sysadmin sqm_purge_queue command to purge queues, without
necessarily hibernating on the Replication Server. Instead, you can suspend the appropriate
modules in the Replication Server, and then purge queues as usual. Running sysadmin
sqm_purge_queue with the [, check_only] parameter facilitates this scenario, as it checks
and reports if the appropriate modules were suspended successfully (it does not purge queues),
thus enabling you to make an informed decision before purging queues. Note that you can
continue to purge queues like you did before – by hibernating on the Replication Server. For more
information, see the Usage section under SAP Replication Server Reference Manual > SAP
Replication Server Commands > sysadmin sqm_purge_queue.
Note
Use sysadmin sqm_purge_queue to purge both the inbound queue and the outbound queue. To
purge the inbound queue, set <q_type> to 1. To purge the outbound queue, set <q_type> to 0.
e. Log in to the external Replication Server and hibernate off the external Replication Server:
sysadmin hibernate_off
go
f. Resume the connection from the database to the external Replication Server and from the external
Replication Server to the HADR system:
g. Without purging, drop the existing subscriptions in the external Replication Server:
use master
go
create login <temp_remater_maint_user> with password <password>
go
grant role replication_role to <temp_remater_maint_user>
go
use <dbname>
go
sp_adduser <temp_remater_maint_user>
go
grant all to <temp_remater_maint_user>
go
i. Create the temporary user (<temp_remater_maint_user>) on the external SAP Replication Server
and grant it sa role.
k. Log in to the SAP ASE server as an sa user and dump the database.
l. Log in to the external Replication Server to check that the subscription is valid:
use master
go
select spid from sysprocesses where dbid = db_id ('<database_name>')
go
use master
go
kill <spid>
go
p. Truncate the following dbo system tables on the primary data server in the HADR system:
use <database_name>
go
truncate table dbo.rs_lastcommit
go
truncate table dbo.rs_threads
go
truncate table dbo.rs_ticket_history
go
truncate table dbo.rs_mat_status
go
truncate table dbo.rs_dbversion
go
q. (Optional) If you used two different external maintenance users to create the connections from the SAP ASE server to the external Replication Server and from the external Replication Server to the HADR system, drop the external maintenance user for the SAP ASE server in the HADR system, then add the external maintenance user for the HADR system to the database and grant all permissions to it:
use <database_name>
go
sp_dropuser <ext_maint_user_a>
go
sp_adduser <ext_maint_user_b>
go
grant all to <ext_maint_user_b>
go
r. Create the system tables and procedures for the external maintenance user. Log in to the primary
server of the HADR system and load the rs_install_replicate.sql file:
s. The connection between the external SAP Replication Server and the HADR system becomes
suspended after you dump the database. Resume the connection:
t. Rematerialize the database on the companion server in the HADR system. See Materializing and
Rematerializing Databases [page 301] for more information.
The SPQ Agent is a Replication Server component that reads the Simple Persistent Queue (SPQ), and forwards
the messages to the external system, thus acting as a Replication Agent to the external system.
● The SPQ Agent path and thread states, which indicate whether the path is configured for the SPQ Agent.
● The SPQ Agent backlog size.
Run the sap_status spq_agent command in an active HADR system that has external replication
configured. Internally, for each registered host, the RMA uses these two Replication Server commands, and
merges the results:
Note
The sap_status spq_agent command provides basic information on the SPQ Agent, such as the SPQ Agent state and its backlog size. To get additional details, use internal Replication Server commands such as admin who, spqra.
If BACKLOG displays N/A and STATE displays INACTIVE, the SPQ Agent on this path is not configured, and there is no functional external replication on this path.
Failover switches activities to the standby node when the primary node is unavailable.
You can perform planned and unplanned failovers within an HADR cluster with external replication. The failover
happens between the primary node and the standby node within the HADR system, and does not impact the
external replication system.
When data is replicating from an external replication system to HADR, the failover process is the same as that
of an independent HADR cluster. See Planned and Unplanned Failovers [page 321] for details.
This chapter focuses on how to perform a failover when the data is replicating from an HADR cluster to an
external replication system, with examples using the following values:
A planned failover to the standby node allows you to perform regular maintenance work and scheduled system
upgrades on the primary node without affecting replication to the external system.
Prerequisites
Suspend customer application activities against the primary database to ensure a clean transition to the
standby site, sparing client applications from reacting to the server downtime.
Procedure
1. Log in to the primary RMA and run a command similar to the following. This example uses a timeout of 120
seconds:
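A sketch of the command, assuming the logical host names registered with RMA:
sap_failover <primary_logical_host_name>, <standby_logical_host_name>, 120
go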
sap_failover is an asynchronous command and must complete before you perform the next step. Use
the sap_status command to check the failover status.
sap_status
go
When sap_failover has finished, the SAP ASE server on the former standby node becomes the new
primary server. It is only activated after all transaction backlogs on the former primary node are drained to
the former standby node (current primary). Client applications can connect to the former standby node to
continue business activities.
2. Run the following command to check whether all transaction backlogs are drained from the former primary
node to the external replication system. The example uses a timeout of 120 seconds:
sap_failover_drain_to_er 120
go
Note
You may also disable the replication from the HADR cluster to a database or all databases on the
external system by executing sap_failover_drain_to_er skip [<dbName>], but doing so causes
the external replicate databases to be out of sync with the HADR cluster.
This is an asynchronous command. Use the sap_status command to check the progress:
sap_status
go
TASKNAME  TYPE          VALUE
--------  ------------  ----------------------------
Status    Start Time    Wed Sep 07 12:01:00 UTC 2016
Status    Elapsed Time  00:00:36
When sap_failover_drain_to_er has successfully finished, all backlogs are drained to the external
system and replication is established from the new primary node to the external replication system.
3. (Optional) If the Fault Manager is configured to restart SAP ASE, SAP Replication Server, and RMA, stop it
before you perform any maintenance activity on the former primary node. From the
<install_directory>/FaultManager directory, issue:
<Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
4. When the former primary node is ready to rejoin the replication system, run a similar command in RMA:
sap_host_available PR
go
The system displays the following information when the command has finished successfully.
Replication from the former standby node to the former primary node is established.
5. Verify replication path status.
○ To verify the replication paths within the HADR cluster, run sap_status path in RMA.
sap_status path
go
Note
If the replication data load is low, the synchronization state may not update to Synchronous after
you run the sap_host_available command to establish replication. To refresh its value, run the
sap_send_trace <primary_host_name> command, then re-run the sap_status path
command.
○ To verify the replication to the external system, connect to SAP Replication Server on that site and run
the sysadmin path_check command:
sysadmin path_check
go
When the primary SAP ASE server is down or lost, perform an unplanned failover so client applications can
continue to work on the SAP ASE server configured on standby node.
Context
Use the sap_failover command with the unplanned option to perform an unplanned failover from the
primary node to the standby node.
Procedure
1. If the Fault Manager is not configured, in the RMA, run the sap_status path command to verify that the
synchronization state of the primary node is synchronous.
sap_status path
go
A status of synchronous means there is no data loss between the primary and standby SAP ASE servers.
After the failover, client applications can directly connect to the former standby server and resume
business.
A status of asynchronous means there may be some data loss on the standby SAP ASE, in which case,
make sure the data loss is acceptable before you perform an unplanned failover. Otherwise, failover is not
recommended.
2. If the Fault Manager is not configured, enter an sap_failover command with the unplanned option to
initiate the unplanned failover. The example uses a deactivation timeout of 120 seconds:
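A sketch of the command, assuming the logical host names registered with RMA:
sap_failover <primary_logical_host_name>, <standby_logical_host_name>, 120, unplanned
go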
Be sure to use the unplanned option; otherwise, the command fails with a warning message asking you
to rerun the command with the unplanned option.
Use the sap_status command to check the progress, and proceed only after sap_failover has
finished.
sap_status
go
TASKNAME  TYPE        VALUE
--------  ----------  ----------------------------
Status    Start Time  Wed Sep 07 12:16:40 UTC 2016
Failover  Hostname    site0
When sap_failover has finished successfully, the SAP ASE server on the former standby node becomes
the new primary server. It is activated only after all transaction backlogs on the former primary node are
drained to the former standby node. Client applications can connect to the former standby node to
continue activities.
3. Run the sap_failover_drain_to_er command to check whether all transaction backlogs are drained
from the former primary node to the external replication system. This example uses a timeout of 120
seconds:
sap_failover_drain_to_er 120
go
Note
You may also disable replication from the HADR cluster to a database or all databases on the external
system by executing sap_failover_drain_to_er skip [<dbName>], but doing so causes the
external replicate databases to be out of sync with the HADR cluster.
sap_status
go
TASKNAME  TYPE          VALUE
--------  ------------  ----------------------------
Status    Start Time    Wed Sep 07 12:17:12 UTC 2016
Status    Elapsed Time  00:00:34
When sap_failover_drain_to_er has finished successfully, all backlogs are drained to the external
replication system and replication is established from the new primary node to the external system.
4. Restore the SAP ASE server on the former primary node. When the node is ready to rejoin the replication
system, run a similar command in RMA:
sap_host_available PR
go
The system displays the following information when the command has finished successfully.
Replication from the former standby node to the former primary node is established.
5. Verify replication path status.
○ To verify the replication paths within the HADR cluster, run the sap_status path command in RMA.
sap_status path
go
Note
If the replication data load is low, the synchronization state may not update to Synchronous after
you run the sap_host_available command to establish replication. To refresh its value, run the
sap_send_trace <primary_host_name> command, then re-run the sap_status path
command.
○ To verify the replication to the external system, connect to SAP Replication Server on that site and run
the sysadmin path_check command:
sysadmin path_check
go
In an external replication system, the SAP Replication Server instance on the standby host receives data from
the primary SAP ASE server, then replicates data to the external SAP Replication Server. Replication to the
external system stops when the standby host goes down. To resume data replication, configure the primary
SAP ASE server to bypass the standby SAP Replication Server, and replicate data to the external SAP
Replication Server directly. After you restore the standby host, reconfigure the primary SAP ASE server to
connect to the standby SAP Replication Server and rematerialize the standby databases.
Note
If you choose to configure the primary SAP ASE server to connect to the external SAP Replication Server when the standby host is down, you must then rematerialize the standby databases after reconfiguring the primary SAP ASE server to connect to the standby SAP Replication Server.
The following diagram shows the data flow after you configure the primary SAP ASE server to connect to the
external SAP Replication Server:
● Automatic – the preferred method. See Automatically Configuring Primary SAP ASE to Replicate Data to
External System When the Standby Host is Down [page 279].
● Manual – use this method only if the automatic method does not work. See Manually Configuring Primary
SAP ASE to Replicate Data to External System When the Standby Host is Down [page 281].
When the standby host goes down, you can automatically configure the primary SAP ASE server to replicate data to the external SAP Replication Server, then configure it to connect to the standby SAP Replication Server again after the standby host comes back up.
1. Shut down the Fault Manager. See Shutting Down the Fault Manager [page 279].
2. Configure RepAgent to connect to the external system. See Configuring RepAgent to Connect to the
External System [page 280].
3. Reconfigure RepAgent after restoring the standby host. See Reconfiguring RepAgent After Standby Host is
Restored [page 280].
Related Information
Manually Configuring Primary SAP ASE to Replicate Data to External System When the Standby Host is Down
[page 281]
Shut down the Fault Manager so that the primary SAP ASE is not demoted to standby SAP ASE when the
standby host goes down.
Context
Perform these steps to manually shut down Fault Manager when you cannot stop it gracefully.
Procedure
1. Manually kill the heartbeat process on the primary host if you cannot stop it by executing sybdbfm stop:
2. Kill the Fault Manager process on the host on which the Fault Manager is running:
kill -9 <fm_pid>
When the standby host goes down, configure RepAgent to connect to the external SAP Replication Server to
enable data replication from the primary SAP ASE to the external system.
Procedure
Execute the sap_configure_rat command with the redirect_to_er parameter to redirect the primary
RepAgent to connect to the external SAP Replication Server:
○ <database> | All
Specify <database> to redirect the connection for a specific database, and specify All to redirect the
connection for the whole HADR environment.
○ <ER admin user>, <ER admin password>
Enter the admin user and password of the external SAP Replication Server to allow RMA to connect to it.
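A sketch of the command, assembled from the parameters described above (the exact argument order is an assumption):
sap_configure_rat redirect_to_er, <database> | All, <ER admin user>, <ER admin password>
go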
After you restore the standby host but before you start any HADR components, reconfigure RepAgent by
configuring the primary SAP ASE server to connect to the standby SAP Replication Server again.
Procedure
Execute the sap_configure_rat command with the redirect_to_ha parameter to redirect the primary
RepAgent to connect to the standby SAP Replication Server:
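A sketch of the command (the argument form mirrors the redirect_to_er call and is an assumption):
sap_configure_rat redirect_to_ha, <database> | All
go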
When the standby host goes down, you can manually configure the primary SAP ASE server to replicate data to the external SAP Replication Server, then configure it to connect to the standby SAP Replication Server again after the standby host comes back up.
1. Shut down the Fault Manager. See Shutting Down the Fault Manager [page 281].
2. Collect RepAgent configuration parameters. See Collecting RepAgent Configuration Parameters [page
282].
3. Configure RepAgent to connect to the external system. See Configuring RepAgent to Connect to the
External System [page 283].
4. Reconfigure RepAgent after the standby site becomes available. See Reconfiguring RepAgent After
Standby Host is Restored [page 285].
Related Information
Manually Configuring Primary SAP ASE to Replicate Data to External System When the Standby Host is Down
[page 281]
Shut down the Fault Manager so that the primary SAP ASE is not demoted to standby SAP ASE when the
standby host goes down.
Context
Perform these steps to manually shut down Fault Manager when you cannot stop it gracefully.
1. Manually kill the heartbeat process on the primary host if you cannot stop it by executing sybdbfm stop:
2. Kill the Fault Manager process on the host on which the Fault Manager is running:
kill -9 <fm_pid>
Run sp_config_rep_agent on the primary SAP ASE to collect RepAgent configuration parameters, which
are needed to reconfigure RepAgent when the standby host is back up.
Procedure
1. Run sp_config_rep_agent:
sp_config_rep_agent <user_database>
go
When the standby host goes down, configure RepAgent to connect to the external SAP Replication Server to
enable data replication from the primary SAP ASE to the external system.
Procedure
1. Log into the primary SAP ASE and stop RepAgent on the primary database:
sp_stop_rep_agent <user_database>
go
2. Log into the external SAP Replication Server instance and suspend the log transfer:
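For example, using RCL (the connection name is a placeholder):
suspend log transfer from <dataserver>.<database>
go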
3. Log into the primary SAP ASE to configure RepAgent to connect to the external SAP Replication Server:
Note
○ The value of connect dataserver may differ when connecting to the standby host and the
external system. The server name you need to provide here is the server name that was indicated
when creating the connection to the external system. See Configuring Replication Out From an
HADR System [page 246].
○ The rs username is the user you created on the external SAP Replication Server when
establishing the connection from the HADR system to the external replication system. The user
was created for SPQ Agent to connect to the external SAP Replication Server. See Configuring
Replication Out From an HADR System [page 246].
4. Log into the external SAP Replication Server and resume the log transfer:
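For example, using RCL (the connection name is a placeholder):
resume log transfer from <dataserver>.<database>
go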
5. Log into the primary SAP ASE and start the RepAgent:
sp_start_rep_agent <user_database>
go
sp_stop_rep_agent <database_name>
go
sp_config_rep_agent <database_name> 'disable'
go
By default, SAP Replication Server ensures that there is no data loss when you switch the primary SAP ASE to
replicate data to the external SAP Replication Server.
In stream replication, when simple persistent queue (SPQ) readers Capture and SPQ Agent read data from the
SPQ, they mark their truncation points in different spots in the SPQ due to different data processing speeds.
SAP Replication Server sends the truncation point (TP) to RepAgent as its secondary truncation point (STP)
according to two different mechanisms:
● Mechanism A sends the truncation point of the faster SPQ reader to RepAgent as its secondary truncation
point:
In mechanism A, SAP Replication Server sends the truncation point of the faster SPQ reader (TP2) to
RepAgent as the secondary truncation point. RepAgent truncates all data before TP2 in the primary SAP
ASE log. In this situation, if replication to the external system is slower than the standby host and the
standby host is down, switching the primary SAP ASE to connect to the external SAP Replication Server
causes data loss between TP1 and TP2.
● Mechanism B sends the truncation point of the slower SPQ reader to RepAgent as its secondary truncation point:
SAP Replication Server uses mechanism B (sending the truncation point of the slower SPQ reader, that is
TP1, to RepAgent as its secondary truncation point) by default to make sure that no data is lost after
switching the connection of RepAgent from the standby to the external SAP Replication Server.
In this mechanism, RepAgent always truncates data before TP1 in the primary SAP ASE log. Even if the
external system replicates data slower than the standby host, RepAgent does not truncate data that has
not been replicated to the external SAP Replication Server.
When the trace is turned on, SAP Replication Server uses mechanism A and sends the truncation point of the
faster SPQ reader to RepAgent as its secondary truncation point. The secondary truncation point moves faster
and the logs are truncated faster in the primary SAP ASE. However, in this situation, there may be data loss
when the standby host is down and you switch the primary SAP ASE server to connect to the external SAP
Replication Server. To prevent data loss, turn off the trace CI_MOVETP_BY_GTP_OFF:
After you restore the standby host, configure the primary SAP ASE server to connect to the standby SAP
Replication Server again, by reconfiguring RepAgent before starting any HADR components.
Procedure
1. Log into the primary SAP ASE and stop RepAgent on the primary database:
sp_stop_rep_agent <user_database>
go
2. Log into the external SAP Replication Server to suspend the log transfer:
3. Log into the primary SAP ASE and reconfigure RepAgent on the primary database to connect to the
standby SAP Replication Server:
The values you provide for the following configuration parameters should be the values you collected
for the connection between the primary SAP ASE server and the standby SAP Replication Server. See
Collecting RepAgent Configuration Parameters [page 282]:
○ rs servername
○ rs username
○ rs password
○ connect dataserver
9. Log into the primary SAP ASE and start the RepAgent:
sp_start_rep_agent <user_database>
go
10. Rematerialize standby SAP ASE databases that are replicated to the external system. See Rematerializing
Databases for External Replication [page 257].
11. (Optional) Rematerialize the databases that do not participate in external replication, such as master and
CID, if you disabled their data replication while the standby site was down. See Rematerializing Databases for
External Replication [page 257].
Tuning the performance of an external Replication Server involves configuring two SPQ Agent parameters,
ci_pool_size and ci_package_size.
● ci_pool_size – Specifies the size of the Component Interface (CI) buffer pool. For data replication from the
simple persistent queue (SPQ) to the SPQ Agent, use the alter connection command to change the
value of the ci_pool_size parameter. For example:
alter connection to ds.db set ci_pool_size to '100'
● ci_package_size – Specifies the size of a CI package. Each CI package in the CI buffer pool shares the same
size, configured by the ci_package_size parameter. Use the same value for ci_package_size as you used
for RepAgent.
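For example, mirroring the ci_pool_size call above (the value is a placeholder):
alter connection to ds.db set ci_package_size to '<package_size>'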
This section lists the recommended hardware, operating system, and networking configurations for an external
replication system.
● System Hardware
○ Simple Persistent Queue (SPQ) must be on a dedicated solid-state drive (SSD).
○ Use 8 to 32 GB of memory and 4 to 12 cores on the host for HADR components, depending on sizing.
● Operating System
○ Tune file system for SPQ and Replication Server partitions.
○ For SPQ, consider using PCIe solid-state drive (SSD) technology over host bus adapters (HBAs).
Depending on sizing and volume, inbound queue (IBQ) and outbound queue (OBQ) may share PCIe
SSD with SPQ.
○ Separate log and data. The file systems hosting logs and data should be on different mount points and
different volumes.
● Network Configurations
○ You should use 10 gigabit Ethernet, or a separate subnet and NICs for high-availability components.
○ Set network TCP send and receive buffer.
○ Tune NIC interface for queue size.
There are a number of commands you can run to confirm that your HADR system is running correctly.
● Log into the primary and companion servers and use the hadr_mode, hadr_state, and asehostname
functions to confirm that:
○ On the primary server, the HADR mode is "primary" and the HADR state is "active"
○ On the companion server, the HADR mode is "standby" and the HADR state is "inactive"
○ The host names are correct
This example confirms that the HADR mode is "Primary" and the HADR state is "Active" on the primary
site, mo-ae3a62265:
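For example, using the built-in functions (output formatting may vary):
select hadr_mode(), hadr_state(), asehostname()
go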
This example confirms that the HADR mode is "Standby" and the HADR state is "Inactive" on the
companion site, mo-4a63cdeba:
● Connect to RMA on one of the hosts (the primary, mo-ae3a62265, in this example) and execute the
sap_status path command to confirm that the synchronization state shows "Synchronous" and the
paths are "Active" for all replicated databases:
● Insert an rs_ticket on the primary server, and verify that it appears on the companion server to confirm
that replication is functioning correctly:
use IND
go
rs_ticket "Testing_HADR_Configuration"
go
(return status = 0)
On the companion server, query the rs_ticket_history table in the same database to find the inserted
ticket.
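For example:
use IND
go
select * from rs_ticket_history
go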
● Verify the Fault Manager is running correctly by issuing sybdbfm status from the installation directory
on the host running the Fault Manager to confirm that the Fault Manager is monitoring the HADR
configuration, and that the command displays the replication status as SYNC_OK. For example:
fault manager running, pid = 34534, fault manager overall status = OK,
currently executing in mode PAUSING
*** sanity check report (8720)***.
node 1: server mo-ae3a62265.mo.sap.corp, site hasite0.db host status: OK.db
status OK hadr status PRIMARY.
node 2: server mo-4a63cdeba.mo.sap.corp, site hasite1.db host status: OK.db
status OK hadr status STANDBY.
replication status: SYNC_OK.
Also, verify that the HADR tab in the SAP ASE Cockpit shows the Fault Manager status under Service
Components as UP.
Administering the HADR system includes adding users and databases, materializing and rematerializing
databases, performing planned failover, recovering from unplanned failover, and so on.
Many of the administrative tasks require that you log into SAP ASE, Replication Server, and RMA at the isql
command line.
Set the SAP ASE environment variables by sourcing the $SYBASE/SYBASE.csh (for SAP ASE) and
$SYBASE/DM/SYBASE.csh files (for Replication Server and RMA). You can view the Replication Server port
numbers in the $SYBASE/DM/interfaces file.
For example:
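A sketch of typical isql invocations for SAP ASE, Replication Server, and RMA (server names, ports, and logins are placeholders):
isql -Usa -P<password> -S<ase_server_name>
isql -U<rs_admin_user> -P<password> -S<rep_server_name>
isql -U<rma_admin_user> -P<password> -S<host_name>:<rma_port>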
See the SAP ASE Utility Guide for more information about isql.
Create the SAP ASE maintenance user to apply activity to the target system (this login is created automatically
when you use setuphadr utility to set up HADR).
Context
The maintenance user requires a unique SAP ASE login. Do not use an existing SAP ASE user as the
maintenance user.
Note
Protect the Replication Server maintenance user's password. See the Replication Server Administration
Guide > Manage Database Connections > Manage the Maintenance User.
Replication Server applies changes to the standby database using the unique maintenance user login.
Replication Server uses the maintenance login to log into SAP ASE servers in the HADR system.
To add the maintenance user, perform these steps on both the primary and standby server:
● Create the maintenance login on SAP ASE servers in the HADR system.
● The maintenance login name is <SID_name>_maint. For example, if the SID of the HADR system is SAP1,
the name of the maintenance user is SAP1_maint.
● The maintenance login SUID is the same on all SAP ASE instances (the SAP installer sets the maintenance
SUID to 1001 by default).
1. Create the maintenance user. This example creates a maintenance user named D01_maint:
use master
go
create login D01_maint with password Sybase123
go
2. Grant the replication_role to the maintenance user, enabling it to replicate the truncate table
command:
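For example:
grant role replication_role to D01_maint
go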
3. Use sp_addalias to alias the maintenance user to the database owner on the master and user
databases, allowing this user to update tables that use IDENTITY columns:
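For example, in the master database and in each replicated user database:
use master
go
sp_addalias D01_maint, dbo
go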
4. Grant the sa_role to the maintenance user so it can replicate insert, update, and delete operations on
all tables:
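For example:
grant role sa_role to D01_maint
go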
6. Grant set session authorization permissions to the maintenance user, allowing it to become
another user when applying DDL commands to the replicate database. Grant the permission to a user or a
role, not a login:
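For example, granting the permission to the dbo user, to which the maintenance login is aliased:
grant set session authorization to dbo
go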
The HADR system maintains a list of databases that are replicated (issue the sap_set RMA command to see
this list). However, user databases created by a create database command are not automatically added to
the HADR participating database list.
● Create the new database on both the primary and standby servers with similar physical size
configurations.
Note
Creating the database on the standby server with the for_load parameter skips the page-clearing
step and reduces operation time.
● The database administrator needs proper permissions to create and add the database.
1. Issue sap_set from the RMA command line to determine if the database is already added to the HADR
system. If the sap_set output lists the database you are adding in the participating_databases line,
this database is already included in the HADR replication system. For example, this output indicates that
the pubs2 database is not included in the HADR replication system:
sap_set
go
PROPERTY VALUE
----------------------------------------- ------------------------
maintenance_user DR_maint
sap_sid AS1
installation_mode nonBS
participating_databases [master,AS1]
connection_timeout 5
connection_alloc_once true
. . .
2. Create the new database on the primary and standby servers, making sure the databases use appropriate
sizing for the data and log devices. For example, if you create the pubs2 database on the primary
companion, create it on the secondary companion as well.
3. Add the DR_maint login as the database owner (dbo) alias in the newly created database on both the primary
and standby servers. For example, on the newly created pubs2 database:
use pubs2
go
sp_addalias DR_maint,dbo
go
4. Issue sap_status path to verify that the paths for all databases are active. The output of sap_status
path is very long; the lines that indicate the active paths look similar to:
sap_status path
go
PATH                   NAME          VALUE                    INFO
. . .
5. Issue sap_update_replication from the RMA to add the newly created databases to the participating
database list on the primary and standby servers. For example:
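The example command is missing in this copy of the guide; a sketch, assuming sap_update_replication takes the new database name as its argument:

```sql
-- run from the RMA command line on the primary site
sap_update_replication pubs2
go
```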
This command returns immediately and starts an asynchronous job to prepare the new database to join HADR
replication. The job may take a long time to complete. Issue the sap_status RMA command until the output
includes a line indicating that the sap_update_replication command succeeded:
sap_status
go
TASKNAME          TYPE                   VALUE
----------------- ---------------------- ------------------------------------------------------------
Status            Start Time             Thu Nov 19 21:01:21 UTC 2015
Status            Elapsed Time           00:02:05
UpdateReplication Task Name              Update Replication
UpdateReplication Task State             Completed
UpdateReplication Short Description      Update configuration for a currently replicating site.
UpdateReplication Long Description       Update replication request to add database 'pubs2' completed successfully
UpdateReplication Current Task Number    3
UpdateReplication Total Number of Tasks  3
UpdateReplication Task Start             Thu Nov 19 21:01:21 UTC 2015
UpdateReplication Task End               Thu Nov 19 21:03:26 UTC 2015
UpdateReplication Hostname               SFMACHINE1
6. Materialize the newly added database so it synchronizes with the corresponding database on the standby
server. This example uses automatic materialization:
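The materialization command itself is missing from this copy; a sketch, assuming the sap_materialize auto argument order is source logical host, target logical host, database name (SFHADR1 and SJHADR2 match the site names in the sample output):

```sql
-- run from the RMA command line; starts an asynchronous task
sap_materialize auto, SFHADR1, SJHADR2, pubs2
go
```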
TASKNAME       TYPE               VALUE
-------------- ------------------ ------------------------------------------------------------
Materialize    Start Time         Thu Nov 19 21:27:24 UTC 2015
Materialize    Elapsed Time       00:00:00
DRExecutorImpl Task Name          Materialize
DRExecutorImpl Task State         Running
DRExecutorImpl Short Description  Materialize database
DRExecutorImpl Long Description   Started task 'Materialize' asynchronously.
DRExecutorImpl Additional Info    Please execute command 'sap_status task' to determine when task 'Materialize' is complete.
Materialize    Task Name          Materialize
Materialize    Task State         Running
Materialize    Short Description  Materialize database
Materialize    Hostname           SFMACHINE1
This command may take a long time to complete. Issue the sap_status command until the output includes a
line indicating that the sap_materialize command succeeded:
sap_status
go
TASKNAME    TYPE              VALUE
----------- ----------------- ------------------------------------------------------------
. . .
Materialize Long Description  Completed automatic materialization of database 'pubs2' from source 'SFHADR1' to target 'SJHADR2'.
. . .
Issue sap_status path to verify the HADR system is healthy and the pubs2 database is added to the
HADR system. The output of sap_status path is very long; the lines that indicate the active paths
look similar to:
sap_status path
go
PATH                   NAME          VALUE                    INFO
---------------------- ------------- ------------------------ ------------------------------------------
                       Start Time    2015-11-19 20:21:41.185  Time command started executing.
                       Elapsed Time  00:00:00                 Command execution time.
. . .
You can load data from an external dump using the RMA command line or from the SAP ASE Cockpit. Typically,
external dumps are used to refresh local SAP ASE databases with the latest data dumped from another
database system, or to migrate your existing SAP ASE database into an SAP ASE HADR-enabled database
system.
1. Issue sap_status path to verify the HADR system is healthy and the pubs2 database is added to the
HADR system. The output of sap_status path is very long; the lines that indicate the active paths
look similar to the following, which show that the pubs2 database is active:
sap_status path
go
PATH                   NAME          VALUE                    INFO
---------------------- ------------- ------------------------ ------------------------------------------
                       Start Time    2015-11-19 20:21:41.185  Time command started executing.
                       Elapsed Time  00:00:00                 Command execution time.
. . .
SFHADR1.SJHADR2.pubs2  State         Active                   Path is active and replication can occur.
2. Log into the RMA on the primary host and disable replication for the database (this disables replication
in RMA for that database only):
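The RMA command is missing in this copy; following the sap_disable_replication syntax shown later in this guide (primary logical host, companion logical host, optional database name), a sketch would be:

```sql
-- disable replication for the pubs2 database only
sap_disable_replication SFHADR1, SJHADR2, pubs2
go
```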
3. Load the database dump by logging into the SAP ASE running on the primary site and executing a
command similar to:
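The load command is missing here; a sketch with an illustrative dump file path:

```sql
-- load the external dump into the database on the primary site
load database pubs2 from '/backups/pubs2_external.dmp'
go
```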
4. Bring the database online by logging into the SAP ASE running on the primary site and executing a
command similar to:
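The command is missing here; a minimal sketch using the standard SAP ASE online database command:

```sql
-- bring the loaded database online for use
online database pubs2
go
```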
5. If necessary, add the DR_maint user as database owner for the database by logging into the SAP ASE
running on the primary site and executing a command similar to:
use pubs2
go
sp_addalias DR_maint,dbo
go
6. Enable replication for the database by logging into the RMA running on the primary site and executing an
RMA command similar to:
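The RMA command is missing in this copy; following the sap_enable_replication syntax shown later in this guide, a sketch would be:

```sql
-- re-enable replication for the pubs2 database
sap_enable_replication SFHADR1, SJHADR2, pubs2
go
```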
7. Issue sap_status path to verify that the paths for the database are active. The output of sap_status
path is very long; the lines that indicate the active paths look similar to:
sap_status path
go
PATH  NAME        VALUE                    INFO
----- ----------- ------------------------ ------------------------------------------
      Start Time  2015-11-19 20:21:41.185  Time command started executing.
. . .
8. Materialize the databases from the primary site to the standby site using either the automatic or manual
method. This example uses the automatic method:
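The materialization command itself is missing from this copy; a sketch, assuming the sap_materialize auto argument order is source logical host, target logical host, database name:

```sql
-- run from the RMA command line; starts an asynchronous task
sap_materialize auto, SFHADR1, SJHADR2, pubs2
go
```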
TASKNAME    TYPE          VALUE
----------- ------------- ------------------------------------------
Materialize Start Time    Wed Nov 19 20:31:13 EST 2015
Materialize Elapsed Time  00:00:00
sap_status
go
TASKNAME          TYPE              VALUE
----------------- ----------------- ------------------------------------------------------------
. . .
Materialize       Long Description  Completed automatic materialization of database 'pubs2' from source 'SFHADR1' to target 'SJHADR2'.
. . .
UpdateReplication Hostname          SFMACHINE1
9. Issue sap_status path to verify that the paths for the database are active. The output of sap_status
path is very long; the lines that indicate the active paths look similar to the following, with the
replication path from SFHADR1 to SJHADR2 for the pubs2 database in the Active state:
sap_status path
go
PATH  NAME        VALUE                    INFO
----- ----------- ------------------------ ------------------------------------------
      Start Time  2015-11-19 20:21:41.185  Time command started executing.
. . .
1. Log into RMA on the primary site and execute this RMA command:
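The RMA command is missing in this copy; because step 3 re-enables replication, this step disables it, and a sketch following the sap_disable_replication syntax shown later in this guide would be:

```sql
-- disable replication for the database to be loaded
sap_disable_replication SFHADR1, SJHADR2, pubs2
go
```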
2. Load the database. See Manage SAP ASE > Backup and Restore > Restoring (Loading) a Database >
Generating a Database Load Sequence in the SAP ASE Cockpit documentation.
3. Enable replication by logging into RMA on the primary site and executing this RMA command:
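The RMA command is missing in this copy; a sketch following the sap_enable_replication syntax shown later in this guide:

```sql
-- re-enable replication for the loaded database
sap_enable_replication SFHADR1, SJHADR2, pubs2
go
```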
4. Rematerialize the database. See Manage SAP ASE > Always-On (HADR) Option > Rematerialize Databases
in the SAP ASE Cockpit documentation.
Note
Materializing a database is a resource-intensive process. Do not run sap_materialize on more than one
database at a time.
Log into the RMA and use the sap_materialize command to materialize the databases from the primary site
to the companion site using the automatic or manual method:
● Automatic – Log into RMA on the primary site and issue the sap_materialize command with the auto
option, specifying the source logical host, target logical host, and database name. For example, you
can materialize the pubs2 database from the SFHADR1 site to the SJHADR2 site.
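The guide's syntax and example blocks are missing at this point; a sketch, assuming the argument order auto, source logical host, target logical host, database name:

```sql
-- general form (argument order is an assumption):
-- sap_materialize auto, <source_logical_host>, <target_logical_host>, <dbname>
sap_materialize auto, SFHADR1, SJHADR2, pubs2
go
```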
This command is asynchronous. Run the sap_status RMA command until it shows that the sap_materialize
command succeeded:
sap_status
go
TASKNAME    TYPE        VALUE
----------- ----------- ------------------------------------------------------------
Status      Start Time  Wed Sep 28 21:56:47 EDT 2016
. . .
Materialize Hostname    SFMACHINE1

(9 rows affected)
The RMA prompts you to use the label, RMA_DUMP_LABEL, to distinguish the database dump that you
issue on the primary server from other scheduled system dumps. Replication resumes when the dump
marker with RMA_DUMP_LABEL arrives at SAP Replication Server. When you materialize the replication
database automatically using sap_materialize auto, RMA dumps the database internally with the
specified label. You can modify the label in the /DM/RMA-/instances/AgentContainer/config/
bootstrap.prop or /DM/RMA-/config/bootstrap.prop files. For more information, see
sap_materialize [page 476].
2. Dump the database from the primary server using RMA_DUMP_LABEL and load it onto the companion
server.
1. Use the dump database command to manually dump the database.
For example,
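The dump example is missing here; a sketch with an illustrative file path (the RMA_DUMP_LABEL mechanism described above still applies, but the exact labeling syntax is not shown in this copy of the guide):

```sql
-- manually dump the database on the primary server
dump database pubs2 to '/backups/pubs2_rma.dmp'
go
```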
Note
Run only one dump database command between each pair of sap_materialize start and
sap_materialize finish commands. Any modifications you make to the master database between the
time of making the dump and issuing sap_materialize finish are not replicated to the standby.
3. Log into the RMA running on the primary site and issue this RMA command:
For example:
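The example is missing here; a sketch, assuming sap_materialize finish takes the same source, target, and database arguments as the other sap_materialize forms:

```sql
-- complete the manual materialization
sap_materialize finish, SFHADR1, SJHADR2, pubs2
go
```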
This command is asynchronous. Run this RMA command until it shows that the sap_materialize
command succeeded:
sap_status task
Rematerializing Databases
Rematerialization reactivates the replication paths that are inactive due to replication problems (such as row
count mismatch). Replication paths are critical for keeping the primary and standby databases in sync. Inactive
replication paths lead to data inconsistency between the primary and standby HADR databases.
Rematerialization resolves these inconsistencies.
Before you rematerialize a database, ensure that:
● The database administrator has the sa role and the DR_admin account (to log into RMA).
● The primary and standby databases are online and accessible.
● The primary and standby Replication Servers are online.
The steps below describe rematerializing a database from the RMA command line. You can also rematerialize
databases from the SAP ASE Cockpit. See Manage SAP ASE > Always-On (HADR) Option > Rematerializing
Databases in the SAP ASE Cockpit documentation for information.
1. Disable replication to the database to be rematerialized. You can perform this task with replication to the
standby database in a suspended state. From RMA issue:
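The RMA command is missing in this copy; a sketch following the sap_disable_replication syntax shown later in this guide (primary logical host, companion logical host, optional database name):

```sql
-- disable replication for the database to be rematerialized
sap_disable_replication SFHADR1, SJHADR2, pubs2
go
```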
3. Issue sap_status path to verify the HADR system is healthy and the pubs2 database is added to the
HADR system. The output of sap_status path is very long; the lines that indicate the replication paths
look similar to the following, which show that the path for the pubs2 database is defined:
sap_status path
go
PATH                   NAME          VALUE                    INFO
---------------------- ------------- ------------------------ ------------------------------------------
                       Start Time    2015-11-19 20:21:41.185  Time command started executing.
                       Elapsed Time  00:00:00                 Command execution time.
. . .
SFHADR1.SJHADR2.pubs2  State         Defined                  Path is defined and ready for replication.
4. Materialize the newly added database so it synchronizes with the corresponding database on the standby
server. This example uses automatic materialization:
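The materialization command itself is missing from this copy; a sketch, assuming the sap_materialize auto argument order is source logical host, target logical host, database name:

```sql
-- run from the RMA command line; starts an asynchronous task
sap_materialize auto, SFHADR1, SJHADR2, pubs2
go
```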
. . .
ASEDBDumpAndLoad  Hostname  SFHADR1
This command may take a long time to complete. Issue the sap_status task command until the output
includes a line indicating that the sap_materialize command succeeded:
sap_status task
go
TASKNAME    TYPE              VALUE
----------- ----------------- ------------------------------------------------------------
. . .
Materialize Long Description  Completed automatic materialization of database 'pubs2' from source 'SFHADR1' to target 'SJHADR2'.
5. Issue sap_status path to verify the HADR system is healthy and the pubs2 database is added to the
HADR system. The output of sap_status path is very long; the lines that indicate the active paths
look similar to the following, which show that the pubs2 database is active:
sap_status path
go
PATH                   NAME          VALUE                    INFO
---------------------- ------------- ------------------------ ------------------------------------------
                       Start Time    2015-11-19 20:21:41.185  Time command started executing.
                       Elapsed Time  00:00:00                 Command execution time.
. . .
SFHADR1.SJHADR2.pubs2  State         Active                   Path is active and replication can occur.
The SAP Adaptive Server Enterprise Cockpit (SAP ASE Cockpit) is a Web-based tool for monitoring the status
and availability of SAP ASE servers.
SAP ASE Cockpit provides availability monitoring, historical performance monitoring, and administration
capabilities in a scalable Web application that are integrated with management modules for other SAP
products, including the always-on option. The cockpit offers management of alerts that provide state- and
threshold-based notifications about availability and performance in real-time, and intelligent tools for spotting
performance and usage trends, all via a thin-client, rich Internet application delivered through your Web
browser.
In an HADR environment, the cockpit offers a visual display of the status for SAP ASE and Replication Server,
the modes in which they are currently running, and how efficient the connections are. This section provides an
overview of the MONITOR, EXPLORING, and ALERTS tabs.
Along with the usual user logins, you can log into the SAP ASE Cockpit as the sa or the DR_admin. The
DR_admin user is more restricted in its scope than sa, and is sufficient for most tasks in the SAP ASE Cockpit.
The SAP ASE Cockpit includes extensive online help. To view an online version, select Help at the top of the
screen, or go to http://help.sap.com/adaptive-server-enterprise.
The SAP ASE Cockpit displays the primary and standby machines graphically as boxes, with a red, gray, or
green line connecting the boxes. The server on the left is always the SAP ASE server on which you are focusing.
The primary server is in light yellow color, and the standby server is in light gray color. A green line indicates
that the systems are successfully connected. The site name, HADR status, synchronization mode, and
synchronization state are indicated in the boxes. The logical host name, host name, and port number on which
the server is run are shown in the inner, colored box.
A red line indicates that replication has stopped. A gray line indicates that replication is suspended.
Click the center button to display and close the Replication Paths table, which includes information about:
Note
The status of these columns is updated during the HADR statistics collection cycle. Mouse over the
column headings to determine when they are updated. The column headings and the tool tip indicate
the collection time and interval.
The Service Component Status and the Replication Paths Status screens summarize the status of all
components in the Service Component panel and the summary status in the middle Replication Paths table,
respectively. The screens use the following labels and icons for different health conditions:
● Service Component Status – The green icon with an “Active” label indicates that there are no warnings or
errors in any of the service components. The yellow icon with a “Warning” label indicates that warnings exist
in one or more service components.
Below the Replication Paths table is a table with the following tabs:
● Service Components – Shows the status of the local and remote RMA, Fault Manager, and the local and
remote Replication Server. RMA is considered active if either the local or remote RMA is running.
● Fault Manager Message – Displays up to 100 Fault Manager messages, with the most recent on top. Click
the header tab to change the default sort order. High severity messages are shown in red, low severity
messages are shown in black, and Recovered messages are shown in green. When SAP ASE receives a new
Fault Manager message, it automatically switches to the Service Components screen to display the
message.
● Log Records – Displays information based on the % of Free Transaction Log, Log Records
Scanned, and Log Records Processed KPIs.
● Throughput – Displays information based on the level of throughput.
● Backlog – Displays information based on the ASE Log Backlog, Primary RS Backlog, and Remote
RS Backlog KPIs.
● Latency – Displays information based on the PDA to EXEC Latency, EXEC to DIST Latency, DIST
to DSI Latency, and DSI to RDB Latency KPIs.
SAP ASE Cockpit collects these alerts for each RMA agent or server:
SAP ASE Cockpit collects these alerts for each replication path:
Select ASE Servers <server_name> Manage Disaster Recovery from the drop-down list to view the
options for administering the HADR System.
Viewing Alerts in the HADR System with the SAP ASE Cockpit
The Alerts tab is a visual display of current and previous alerts issued against the system.
From this screen, you can also configure the SAP ASE Cockpit to notify you of incoming alerts, configure how
often the SAP ASE Cockpit scans for alerts, and set the alert thresholds.
Use RMA commands to suspend, resume, enable, and disable replication for databases.
See sap_suspend_replication [page 541] and sap_resume_replication [page 489]. However, these commands
only work for planned suspend and resume activities (for example, when using the sap_configure_rs
command). If a DSI connection suspends due to an error, you may need to use the native Replication Server
resume because sap_resume does not support system transactions (DDL) and skip capabilities.
Disabling replication in an HADR environment requires that RMA stops the Replication Agent and disables the
secondary truncation point.
Issue this command to disable replication from the active Replication Server to the companion server inside
the HADR system:
sap_disable_replication <primary_logical_host_name>,
<companion_logical_host_name>, [<dbname>]
This command stops the capture in the active Replication Server and disables its truncation point in SPQ so
that it can be truncated properly. However, this command may result in data loss, and you must rematerialize
the standby database before capture restarts.
sap_enable_replication <primary_logical_host_name>,
<companion_logical_host_name>, <dbname>
Specify the database name to enable replication only for that database; omit the database name to enable
replication for the whole server. See sap_enable_replication [page 462].
See Adding Databases from the Command Line After Installation [page 294], or see Manage SAP ASE >
Always-On (HADR) Option > Suspend Replication and Resume Replication in the SAP ASE Cockpit
documentation for information about performing this task in the SAP ASE Cockpit.
The components in the HADR system must be shut down and started in an ordered sequence.
Note
Shutting down the standby server may require you to restart the data server interface (DSI) thread in
Replication Server.
1. Fault Manager. On the host running the Fault Manager, source the <installation_directory>/
SYBASE.csh (SYBASE.sh on the Korn shell or SYBASE.bat on Windows) file, and issue:
○ (UNIX) – <Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
○ (Windows) – <Fault_Manager_install_dir>\FaultManager\bin\sybdbfm.exe stop
2. SAP ASE Cockpit on both hosts. If the cockpit is running, issue:
○ In the foreground – At the cockpit prompt, execute:
shutdown
○ In the background:
○ (UNIX) – $SYBASE/COCKPIT-4/bin/cockpit.sh --stop
○ (Windows) – net stop "Cockpit 4.0"
3. Backup Server on both hosts. Log into SAP ASE on both hosts using isql and issue:
shutdown SYB_BACKUP
4. Primary SAP ASE. Log into SAP ASE using isql and issue:
sp_hadr_admin deactivate,'30','<timeout_period>'
If the server can't be deactivated or has undrained transaction logs, gracefully shut it down:
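The shutdown command is missing in this copy; a graceful SAP ASE shutdown from isql is simply:

```sql
-- waits for currently executing statements and checkpoints
-- each database before stopping the server
shutdown
go
```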
5. Active (companion site) Replication Server. Log into Replication Server using isql and issue:
shutdown
go
6. RMA on both hosts. Log into the RMA using isql and issue:
shutdown
go
7. Standby SAP ASE. Log into SAP ASE using isql and issue:
shutdown
go
8. Inactive (primary site) Replication Server. Log into Replication Server using isql and issue:
shutdown
go
Start up the HADR applications in this sequence (all commands are issued from the command line):
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_<server_name>
For example:
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_SJSAP2
○ (Windows) – %SYBASE%\%SYBASE_ASE%\install\RUN_<server_name>.bat
For example:
%SYBASE%\%SYBASE_ASE%\install\RUN_SJSAP2.bat
cd $SYBASE/DM/<cid>_REP_<logical_site_name>/
nohup ./RUN_<cid>_REP_<logical_site_name>.sh &
For example:
cd $SYBASE/DM/AS1_REP_SJHADR2/
nohup ./RUN_AS1_REP_SJHADR2.sh &
○ (Windows) – %SYBASE%\DM\cid_REP_logical_site_name\<cid>_REP_<logical_site_name>
\RUN_<cid>_REP_<logical_site_name>.bat
For example:
%SYBASE%\DM\AS1_REP_SJHADR2\RUN_AS1_REP_SJHADR2.bat
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_<server_name>
For example:
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_SFSAP1
○ (Windows) – %SYBASE%\%SYBASE_ASE%\install\RUN_<server_name>.bat
For example:
%SYBASE%\%SYBASE_ASE%\install\RUN_SFSAP1.bat
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_<server_name>_BS
For example:
cd $SYBASE/$SYBASE_ASE/install/
startserver -f RUN_SFSAP1_BS
○ (Windows) – %SYBASE%\%SYBASE_ASE%\install\RUN_<Backup_server_name>.bat
For example:
%SYBASE%\%SYBASE_ASE%\install\RUN_SFSAP1_BS.bat
cd $SYBASE/DM/<cid>_REP_<logical_site_name>/
nohup ./RUN_<cid>_REP_<logical_site_name>.sh &
cd $SYBASE/DM/AS1_REP_SFHADR1/
nohup ./RUN_AS1_REP_SFHADR1.sh &
○ (Windows) – %SYBASE%\DM\cid_REP_logical_site_name\<cid>_REP_<logical_site_name>
\RUN_<cid>_REP_<logical_site_name>.bat
For example:
%SYBASE%\DM\AS1_REP_SFHADR1\RUN_AS1_REP_SFHADR1.bat
cd $SYBASE/$SYBASE_ASE/bin/
nohup ./rma &
○ (Windows) – Start the RMA Windows service by either method, where <cluster_ID> is the ID of the
cluster:
○ Start Sybase DR Agent - <cluster_ID> from the Services panel, or
○ Issue this command:
cd $SYBASE/COCKPIT-4/bin/
nohup ./cockpit.sh &
○ (Windows) –%SYBASE%\COCKPIT-4\bin\cockpit.bat
○ In the background – At the UNIX command line. From the Bourne shell (sh) or Bash, issue:
cd <Fault_Manager_install_dir>
nohup ./sybdbfm_<CID> &
For example:
cd /work/FaultManager/
nohup ./sybdbfm_AS1 &
○ (Windows) – <Fault_Manager_install_dir>\FaultManager\sybdbfm_<CID>.bat
https://SFMACHINE1:4283/cockpit/#
The database administrator (DBA) must make a conscious decision about when to fail over, how to handle any
lost transactions, and what happens to these transactions when the original primary site comes back online.
To ensure that failover to the standby site can proceed when the primary site SAP ASE is not available, use the
unplanned option with the sap_failover command:
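The syntax block is missing from this copy; based on the parameter descriptions that follow, the general form is likely:

```sql
-- sketch of the unplanned failover form; argument names follow
-- the parameter descriptions below
sap_failover <primary_logical_host_name>, <companion_logical_host_name>,
    <deactivate_timeout>, unplanned[, <drain_timeout>]
go
```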
Where:
● deactivate_timeout – is the number of seconds sap_failover waits while the failover process drains
the transaction log and SAP Replication Server queues, and waits for all in-flight data to finish replicating. If
the timeout is reached, the process terminates. You cannot specify the force option in an unplanned
failover because the primary SAP ASE is not available and cannot be deactivated.
● drain_timeout – (optional) is the number of seconds the process waits while draining the transaction
log from the primary SAP ASE to Replication Server. If the timeout is reached, the process terminates. If
not set, the timeout defaults to the value of deactivate_timeout.
Once sap_failover completes successfully, applications can operate with the former standby database
which now runs as the new primary database.
When an unplanned failover occurs, the former standby SAP ASE becomes the new primary SAP ASE.
However, depending on the synchronization state, the former standby SAP ASE may or may not contain the
same data the former primary SAP ASE contained. If, at the time of failover, the environment is in the:
● Asynchronous replication state, or if the primary SAP Replication Server also failed during the event – the
former primary SAP ASE data is lost and you must rematerialize the former primary SAP ASE to match the
content of the new primary (former standby) SAP ASE.
● Synchronous replication state – the primary and standby SAP ASE contain the same data and you need
not rematerialize the former primary SAP ASE.
To determine the replication synchronization mode and synchronization state of the replication path, execute
the sap_status path command.
Note
During an unplanned failover, it is important to know the synchronization state of the environment at the
time the failover is performed. It is the state, and not the requested mode of synchronization, that
determines whether data loss can occur.
Procedure
1. Verify the replication synchronization state of the replication path at the time failure occurred is the
synchronous replication state.
Enter:
sap_status path
go
If the replication path is in a synchronous or near-synchronous replication state, you see an output similar
to this:
If the Synchronization State is Asynchronous and a failover occurs, there is a risk of data loss because not
all data from the primary is guaranteed to have reached the standby SAP ASE. To guarantee the databases
are synchronized when the primary SAP ASE returns to service, rematerialize the primary SAP ASE.
If the Synchronization State is Synchronous, all data from the primary SAP ASE should have been applied
to the standby SAP ASE. Rematerialization is not required.
2. Execute sap_failover with the unplanned option:
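The example is missing in this copy; a sketch with illustrative host names and a 120-second timeout, following the unplanned form described earlier:

```sql
-- fail over to the standby without deactivating the (unavailable) primary
sap_failover SFHADR1, SJHADR2, 120, unplanned
go
```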
Note
The SAP ASE running on the standby site is now the primary companion, and applications can connect
to it.
sap_host_available <primary_logical_host_name>
go
If the environment was in the asynchronous replication state at the time of the failover, or the primary SAP
ASE terminated, there may be data loss, and you may need to rematerialize the former primary SAP ASE
because the primary and standby SAP ASE no longer contain the same data. Make a careful, planned decision
about failover because there is potential for data loss.
Procedure
1. Verify the replication synchronization state of the replication path at the time failure occurred is the
asynchronous replication state.
Enter:
sap_status path
go
If the replication path is in the asynchronous replication state, you see an output similar to this:
If the Synchronization State is Synchronous or Near Synchronous, all data from the primary SAP ASE
should have been applied to the standby SAP ASE. Rematerialization is not required.
If the Synchronization State is Asynchronous and a failover occurs, there is a risk of data loss because not
all data from the primary is guaranteed to have reached the standby SAP ASE. To guarantee the databases
are synchronized when the primary SAP ASE returns to service, you must rematerialize the primary SAP
ASE.
2. Execute sap_failover with the unplanned option:
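The example is missing in this copy; a sketch with illustrative host names and a 120-second timeout, following the unplanned form described earlier:

```sql
-- fail over to the standby without deactivating the (unavailable) primary
sap_failover SFHADR1, SJHADR2, 120, unplanned
go
```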
3. Wait for both the former primary SAP ASE and the former primary SAP Replication Server to start and
become available and ensure that all servers in the HADR system are available for replication.
4. Reconfigure the former primary SAP ASE database as the new standby for the activity occurring at the
former standby SAP ASE database site:
sap_host_available <primary_logical_host_name>
go
5. Stop replication from the former standby SAP ASE (current primary) to the former primary SAP ASE
(current standby):
sap_disable_replication <standby_logical_host_name>
go
6. Reset replication from the former standby SAP ASE (current primary) to the former primary SAP ASE
(current standby):
sap_enable_replication <standby_logical_host_name>
go
This is necessary to prepare for rematerialization of the former primary SAP ASE.
7. Rematerialize the databases from the current primary site to the former primary site.
Ensure that you materialize from the current primary SAP ASE, which you defined earlier during HADR
system setup as <standby_logical_host_name>, to the former primary SAP ASE, which you defined
earlier during HADR system setup as <primary_logical_host_name>.
If SAP Replication Server is unavailable during an SAP ASE startup after an unplanned failover, use SAP ASE
commands to recover a database that is enabled for synchronous replication, and make it accessible online.
Context
If the replication mode is synchronous for the primary data server and SAP Replication Server is unavailable
during SAP ASE startup after an unplanned failover, SAP ASE cannot recover the original primary data server
and make it assume the role of a standby data server, since SAP ASE cannot connect to SAP Replication Server
to obtain information about the last transaction that arrived at SAP Replication Server. For example, if the
database name is D01 and <dbid> represents the database ID, in the SAP ASE error log you see:
Error: 9696, Severity: 17, State: 1
Recovery failed to connect to the SAP Replication Server to get the last oqid for
database 'D01'.
Database 'D01' (dbid <dbid>): Recovery failed.
Check the ASE errorlog for further information as to the cause.
1. Check the SAP ASE error log to see if the latest attempt to connect to the SAP Replication Server failed.
2. Verify that the original primary database has not been recovered.
For example, if the database name is D01, log in to isql and enter:
use D01
go
4. In SAP ASE, enable trace flag 3604 so that events and any errors that occur during database recovery are sent to your session:
dbcc traceon(3604)
go
5. Recover the database:
dbcc dbrecover(D01)
go
The recovery is successful and the database is accessible online if you see the events logged by the trace
flag ending with:
...
Recovery complete.
Database 'D01' is now online.
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
6. Verify that the database is recovered and can be accessed:
use D01
go
Context
When you use the --recover-syncrep-no-connect option for the SAP ASE dataserver executable, SAP
ASE starts and tries to connect to the SAP Replication Server during recovery. If the connection attempts
fail, SAP ASE recovers the database without obtaining the last origin queue ID from SAP Replication Server.
Procedure
...
dataserver --recover-syncrep-no-connect
...
After an unplanned failover for a database that is enabled for synchronous replication that participates in a
multidatabase transaction, the recovery process may not apply changes to all the databases involved in the
multidatabase transaction.
A multidatabase transaction, also known as a cross-database transaction, is a single transaction that affects
tables in different databases where the coordinating database is the database where the transaction started,
and the subordinate databases are the other databases affected by the transaction.
During recovery after an unplanned failover in an HADR system where the replication synchronization mode is
synchronous, only the primary SAP ASE database, which is configured for synchronous replication, rolls back
transactions that have not been stored in the simple persistent queue (SPQ) of the SAP Replication Server.
However, for a multidatabase transaction, only the database that is enabled for synchronous replication rolls back the transaction; the other participating databases do not. Therefore, after recovery, the changes made by the multidatabase transaction will not have been applied consistently across all the participating databases.
During recovery after an unplanned failover, SAP Replication Server assumes the coordinator role that was
previously performed by the coordinating database where the multidatabase transaction started.
After recovery from an unplanned failover where multidatabase transactions have been applied, the status of
the multidatabase transactions depends on whether the replication mode is synchronous for the coordinating
database or one of the subordinate databases. See the SAP ASE error log for the status of the transaction and
decide if you want to manually apply or roll back changes in one of the databases.
● If the coordinating database replication synchronization mode is not synchronous and the subordinate
database replication synchronization mode is:
○ Synchronous – SAP ASE may roll back the changes in the subordinate database during recovery if the
changes are newer than the last transaction received by SAP Replication Server and written to SPQ.
You see this information in the SAP ASE error log:
Synchronously replicated multidatabase subordinate transaction (<page>, <row>)
Only one server in the HADR group should perform client transactions at any one time. If more than one server
assumes the role of the primary server, the databases on the HADR servers can no longer be synchronized, and
the system enters a "split-brain" situation.
The HADR system provides a check against this, which is performed either at start-up if the SAP ASE
configuration file instructs the server to start as a primary, or when you use sp_hadr_admin primary to
manually promote a standby server to the primary server.
The check connects to and queries each configured HADR member. If a remote HADR member in the group is
identified as an existing primary server, the check does not allow the local server to be promoted to the primary
server. Generally, you cannot override this check.
If the check fails to connect to one or more remote HADR members, it assumes that an unreachable member may be a primary server, and refuses to promote the local server to primary. In this situation, you can use the force parameter to override the split-brain check:
Before using the force parameter, verify that there is no other primary server present in the group.
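The decision logic of the split-brain check can be sketched as follows. This is an illustrative Python sketch, not the server's actual implementation; the member names and the probe callable are hypothetical stand-ins for connecting to and querying each configured HADR member.

```python
def may_promote(local, members, probe, force=False):
    """Decide whether `local` may be promoted to primary.

    `probe(member)` returns that member's mode (for example "primary"
    or "standby"), or raises ConnectionError if it is unreachable.
    """
    for member in members:
        if member == local:
            continue
        try:
            mode = probe(member)
        except ConnectionError:
            # An unreachable member might be an active primary:
            # refuse promotion unless the operator forces it.
            if not force:
                return False
            continue
        if mode == "primary":
            # An existing primary was found: promotion is refused,
            # and this result cannot be overridden.
            return False
    return True
```

Note that in this sketch, force only overrides the unreachable-member case; a detected existing primary always blocks promotion, matching the behavior described above.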
Note
If your site is configured for Fault Manager, it handles primary SAP ASE failure, and automatic failover is triggered when it is safe to fail over. If the Fault Manager detects potential data loss when failover is triggered, you must manually intervene to restore the old primary site, or accept data loss and promote the companion SAP ASE as the new primary SAP ASE. The steps described here apply if Fault Manager is not configured and the database administrator must decide how to recover from an unplanned failover.
If failover fails before the new primary SAP ASE is activated, RMA attempts to set the old primary as primary again. A failure after this point requires you to manually activate the new primary, start Replication Agent on the new primary, then execute sap_host_available there when the new standby is running.
Manual Failover
A planned failover occurs when you intend to perform a task that requires a node to be brought down. You can
perform manual failover from the command line or from SAP ASE Cockpit.
1. Connect to the primary or companion RMA and issue sap_failover. This example uses a deactivation
timeout of 60 seconds:
sap_failover SFHADR1,SJHADR2,60
TASKNAME       TYPE               VALUE
-------------- ------------------ ---------------------------------------------
Failover       Start Time         Thu Dec 03 20:03:14 UTC 2015
Failover       Elapsed Time       00:00:02
DRExecutorImpl Task Name          Failover
DRExecutorImpl Task State         Running
DRExecutorImpl Short Description  Failover makes the current standby ASE as the primary server.
DRExecutorImpl Long Description   Started task 'Failover' asynchronously.
DRExecutorImpl Additional Info    Please execute command 'sap_status task' to determine when task 'Failover' is complete.
Failover       Task Name          Failover
Failover       Task State         Running
Failover       Short Description  Failover makes the current standby ASE as the primary server.
sap_failover is an asynchronous command, and must complete before you perform the next step. You cannot run two sap_failover commands in parallel; the first sap_failover command must complete before you issue a second.
2. Connect to the primary or companion RMA and issue sap_status to check the status of the
sap_failover command:
sap_status task
RMA issues messages similar to the following when the failover task is finished:
TASKNAME   TYPE        VALUE
---------- ----------- -----------------------------
Status     Start Time  Thu Dec 03 20:03:14 UTC 2015
3. Log in to the old primary server and verify that its mode and state are standby and inactive.
Alternatively, you can connect to the RMA on the primary companion and issue:
sap_status path
PATH            NAME                   VALUE                   INFO
--------------- ---------------------- ----------------------- ----------------------------------------------
. . .
4. (Optional) Stop the Fault Manager if it is configured to restart SAP ASE, Replication Server and RMA.
Configuring the Fault Manager for unplanned failover and a subsequent automatic restart of these
components can trigger actions that are undesirable during planned failover. Consequently, you should
stop the Fault Manager during any planned activity. From the <install_directory>/FaultManager
directory, issue:
<Fault_Manager_install_dir>/FaultManager/bin/sybdbfm stop
Note
5. After sap_failover successfully completes, it prints a message indicating that you must run
sap_host_available. Issue this command from RMA to clean and disable the old replication path and
activate the new direction for the replication path:
sap_host_available SFHADR1
TASKNAME      TYPE          VALUE
------------- ------------- -----------------------------
HostAvailable Start Time    Thu Dec 03 23:48:34 UTC 2015
HostAvailable Elapsed Time  00:01:24
HostAvailable Task Name     HostAvailable
HostAvailable Task State    Completed
6. Confirm that replication is active from the SAP ASE Cockpit, or from the RMA by issuing:
sap_status path
PATH            NAME                   VALUE                   INFO
--------------- ---------------------- ----------------------- ----------------------------------------
. . .
Alternatively, you can confirm the direction of replication from the SAP ASE Cockpit.
7. (Optional) If the Fault Manager is stopped, restart it. From the <install_directory>/FaultManager
directory, issue:
<Fault_Manager_install_dir>/FaultManager/sybdbfm_<CID>
Follow the instructions in Manage SAP ASE > Always-On (HADR) Option > Performing a Planned Failover in the
SAP ASE Cockpit documentation.
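Because sap_failover runs asynchronously, scripted automation around this procedure typically polls sap_status task until the task state leaves Running. A minimal polling sketch in Python (illustrative only; get_task_state is a hypothetical callable standing in for issuing sap_status task against the RMA):

```python
import time

def wait_for_task(get_task_state, timeout=600, interval=5):
    """Poll until the task state is no longer 'Running'.

    Returns the final state string (e.g. "Completed"), or raises
    TimeoutError if the task does not finish in time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state()   # e.g. parse output of: sap_status task
        if state != "Running":
            return state
        time.sleep(interval)
    raise TimeoutError("task did not finish within the timeout")
```

This mirrors the manual workflow: issue sap_failover, then repeat sap_status task until the Failover task reports a terminal state before moving on.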
Automatic Failover
An unplanned, automatic failover occurs when an event causes a node to go down. The standby server is
automatically promoted to the primary position with an internally executed sap_failover command.
1. Check SAP ASE Cockpit for alerts indicating fault detection, failover initiation, and failover completion.
2. Connect to the standby server and issue the hadr_mode and hadr_state functions to confirm that its HADR mode and state are now primary and active:
Alternatively, you can connect to the RMA on the primary companion and issue:
sap_status path
You can also use sap_status task from RMA to display the progress of the sap_failover command.
Once the failover is complete, the SAP ASE Cockpit indicates that the SAP ASE running on the first site is
now the primary server.
sap_host_available SFHADR1
4. Confirm that replication is active from the SAP ASE Cockpit, or from the RMA by issuing:
sap_status path
PATH            NAME           VALUE                INFO
--------------- -------------- -------------------- ----------------------------------------
. . .
SetupReplication Task  Long Description  Setting up replication between the two ASE hosts 'SFHADR1' and 'SJHADR2' completed successfully. Databases on 'SJHADR2' are now ready to be materialized.
. . .
Check that:
● The standby server becomes the primary server (with a yellow-colored box).
● The primary server becomes the standby server (with a gray-colored box).
● A green connector joins the primary and standby server. If the connector is not green, check for an error
condition.
● The text and icon for Service Component Status are green and Active.
● The text and icon for Replication Paths Status are green and Active.
● Primary – the member of an HADR configuration on which active transaction processing by user
applications is allowed to take place.
● Standby – the member of an HADR configuration that contains copies of specific databases that originate
on the primary member, and is available to take over transaction processing if the primary member fails.
Replication Server replicates database changes on the primary that are marked for replication to standby
members.
● Disabled – HADR is disabled on this member.
● Unreachable – the local member (the server from which you enter commands) cannot reach this remote
HADR member.
● Starting – HADR member is starting.
Note
When you include the force parameter, the HADR system forcibly terminates all the transactions
started by privileged and unprivileged connections. Unprivileged connections cannot start new
transactions when the server is in the deactivating state.
The internal state of a primary server is not preserved across restarts. However, the external mode is saved
across restarts using the HADR mode configuration parameter.
Use the HADR primary check frequency configuration parameter to determine how often the standby
server checks the primary server's mode and state.
If the standby server detects that the other server is not in the primary mode and an active state, it introduces
a delay before sending the address list used for connection redirection. The length of this delay is determined
by the HADR login stall time configuration parameter. See the Reference Manual: Configuration
Parameters.
There are a number of ways to determine the member's mode and state.
● Use the hadr_mode function and the <@@hadr_mode> global variable to determine the member mode.
The return values for <@@hadr_mode> and hadr_mode are:
-1   HADR is disabled.
2    HADR is enabled, but the server is unreachable. This value is not seen by the local server.
● Use the hadr_state function and the <@@hadr_state> global variable to determine the member state.
The return values for <@@hadr_state> and hadr_state are:
2    The server is inactive, and does not allow transaction processing from user applications.
3    The server is changing from the active to the inactive state, and the log is being drained. Eventually, the state should transition to inactive. If deactivation times out, the state may switch back to active.
● You can include a return value (-1, 0, 1, 2, or 3) as an input parameter with hadr_mode and hadr_state
functions to determine the state this return value represents (this is the same verbose information that
<@@hadr_mode> and <@@hadr_state> return). For example:
select hadr_mode(1)
------------------------------------------------------------
Primary
● Issuing hadr_mode and hadr_state functions without arguments returns the mode and state of the
server, respectively:
● Issue sp_hadr_admin mode to determine the current mode of the server (the server below is the primary):
sp_hadr_admin mode
HADR Mode
------------------------------------------------------------
Primary
(1 row affected)
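The return-value mappings above can be collected into lookup tables. The sketch below is illustrative Python covering only the codes documented in this section; the remaining codes exist but are omitted here rather than guessed at.

```python
# Only the codes shown in this section; other return values exist
# (see the full hadr_mode/hadr_state documentation).
HADR_MODE = {
    -1: "HADR is disabled",
    1:  "Primary",
    2:  "HADR is enabled, but the server is unreachable",
}
HADR_STATE = {
    2: "Inactive: no transaction processing from user applications",
    3: "Deactivating: log draining; may revert to active on timeout",
}

def describe(mode_code, state_code):
    """Translate numeric codes to the verbose descriptions, as the
    hadr_mode(n)/hadr_state(n) function forms do."""
    return (HADR_MODE.get(mode_code, "unknown"),
            HADR_STATE.get(state_code, "unknown"))
```

For example, describe(1, 2) corresponds to a primary server that is currently inactive.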
If the start-up sequence described in the previous section is not followed, and the primary SAP ASE is started before the standby SAP ASE, it starts in standby mode due to the split-brain check.
If this occurs, you can connect to the SAP ASE using the privileged login and change the state to primary. Make
sure that the server you intend to promote to primary was indeed primary earlier by checking its SAP ASE log
file.
sp_hadr_admin primary
go
sp_hadr_admin activate
go
Issue the sp_start_rep_agent system procedure in the master, CID, and in each database that participates
in HADR. For example:
use master
go
sp_start_rep_agent master
go
use <CID>
go
sp_start_rep_agent <CID>
go
use <user_database_1>
go
sp_start_rep_agent <user_database_1>
go
use <user_database_2>
go
sp_start_rep_agent <user_database_2>
go
. . .
Managing data loss and viewing the Fault Manager alerts are important checks to perform after an unplanned
failover.
There are a number of steps you perform to manage data loss after an unplanned failover.
1. Verify that the replication synchronization state of the replication path, at the time the failure occurred, was synchronous:
sap_status path
If the replication path is in the synchronous replication state, you see output similar to:
If the Synchronization Mode or Synchronization State is Asynchronous and a failover occurs, there is a risk
of data loss because not all data from the primary is guaranteed to have reached the standby SAP ASE. To
guarantee the databases are synchronized when the primary SAP ASE returns to service, rematerialize the
primary SAP ASE.
If the Synchronization State is Synchronous, all data from the primary SAP ASE should have been applied
to the standby SAP ASE. Rematerialization is not required.
2. Execute sap_failover with the unplanned option:
3. Wait for both the former primary SAP ASE and the former primary SAP Replication Server to start and
become available, and ensure that all servers in the HADR system are available for replication.
4. Reconfigure the former primary SAP ASE database as the new standby for the activity occurring at the
former standby SAP ASE database site:
sap_host_available <primary_logical_host_name>
Fault Manager sends alerts for events requiring database administrator attention. After an unplanned failover,
there are a number of alerts you check to determine if the alert notifications are still active.
If the first alert is still active, wait for it to clear; until it is cleared, failover has not successfully completed. If the
second alert is still active after unplanned failover, performing an unplanned failover might cause data loss.
Check for messages in the Fault Manager Messages table in the SAP ASE Cockpit that indicate the alerts are
cleared.
● For the first alert, watch for this message (in green), which indicates the alert stating Failover initiated from 'site1' to 'site2' is cleared:
● For the second alert, watch for this message (in green), which indicates this alert is cleared:
See the SAP ASE Cockpit > Alerts in SAP ASE for more information.
You can check the status of the replication system using RMA commands as well as the SAP ASE Cockpit. This topic describes the RMA commands used to monitor replication system status: for example, sap_status path, sap_status route, and sap_status resource.
● Check Replication Path Status – Replication paths are used to deliver data changes between the primary
and the standby ASE databases. Each pair of databases (between the primary and the standby SAP ASE
servers) has two replication paths defined. Check the replication path status from the primary SAP ASE server to the standby SAP ASE server to ensure that the paths are in the active state. Use the sap_status path command to check the status of the path. See sap_status [page 517].
● Check Replication Path Sub Components Status – Each replication path consists of servers (SAP ASE
and Replication Server), threads (SAP ASE RepAgent thread and Replication Server internal threads), and
queues (inbound, outbound, and SPQs). sap_status route allows you to collect the status of these
components. See sap_status route [page 531].
● Check Replication Queues Information – Use the sap_status resource command to check the size and buffer usage of the device buffer and SPQs. See sap_status resource [page 527].
Procedure
Execute the sap_send_trace command. If you do not specify a database name, a trace is sent to all
databases for that host: master and ERP (if it exists):
This command inserts an rs_ticket into the source database or databases. Latency is calculated from the
most recent entry in the target database's rs_ticket_history table.
The sap_status path command calculates latency based on the most recent trace received at the standby
database. For example:
If there is a backlog of data, the trace element reflected by sap_status path results may not be the most
recent trace element requested. Verify that Time latency last calculated is the current time, and not
reflective of the trace element that was executed earlier.
● If the second truncation point is not set, SAP ASE returns the backlog from the beginning of the log.
● If the second truncation point is set, execute this command on the primary server to view the number of
backlog pages:
For example, convert the backlog page count to MB:
backlog_pages × page_size / 1024 / 1024
View the SAP ASE page size with the <@@maxpagesize> global variable (this server uses a 4K page size):
select @@maxpagesize
-----------
4096
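Putting the two values together, the backlog size in MB follows from the backlog page count and the page size reported by <@@maxpagesize>. A quick sketch of the arithmetic (illustrative Python):

```python
def backlog_mb(backlog_pages: int, page_size_bytes: int) -> float:
    """Backlog size in MB = pages * page size / 1024 / 1024."""
    return backlog_pages * page_size_bytes / 1024 / 1024

# e.g. 10,000 backlog pages on a server with a 4K page size:
print(backlog_mb(10_000, 4096))   # → 39.0625
```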
Query SAP ASE, Replication Server, RMA, and so on to determine the health of the HADR system.
Generally, you can look at the following to determine the health of the primary and companion servers:
● Log space – issue this in the target database to determine the available space for the logs:
sp_spaceused syslogs
The loginfo and lct_admin functions also display log information. See the SAP ASE Reference Manual:
Building Blocks.
● Replication Agent status – issue this in the target replicated database:
● SAP ASE error log messages – review messages in the error logs.
The default location of the SAP ASE error log is $SYBASE/$SYBASE_ASE/install.
● (On the companion server) Monitoring latency – use the rs_ticket command to view detailed
information regarding latency among the replication server internal threads. See Checking Latency with
rs_ticket [page 339].
● Query RMA – RMA collects all the information about the status of the system. For example, this query
indicates the state of the primary SAP ASE Replication Agent:
select case
    when (Status = "sleeping" and SleepStatus = "opening stream") or
         (Status = "sleeping" and SleepStatus = "stream sleep") or
         (Status = "sleeping" and SleepStatus = "sleeping on log full")
        then "suspect"
    else "active"
end
from master..monRepScanners
where DBID = <dbid>
union
select case when max(Status) is NULL then "down" end
from master..monRepScanners where DBID = <dbid>
go
Generally, you can look at the following to determine the health of Replication Server:
● SPQ size and capacity – issue this command to see the SPQ size:
See Troubleshooting Data That is Not Replicating [page 418] > SPQ is Full for information about fixing a full
SPQ and admin disk_space, mb, spq [page 591].
● Replication Server queue size and capacity – issue this command to determine the Replication Server
queue size and capacity:
admin disk_space
Use the sap_set simple_persistent_queue_size command to change the queue size. See sap_set
simple_persistent_queue_size [page 498].
● DSI status – issue this to determine the status of the DSI threads:
admin who
There are a few situations during which failures may typically occur in the HADR system.
Situations include:
● Failing to add instances to the HADR system – see Troubleshooting the HADR System [page 384] > Failure
to Add an Instance.
● RMA commands fail to run – see Troubleshooting the HADR System [page 384] > RMA Command Failure.
● Failure during setup – see Recovering from a Failed Setup [page 399].
The HADR system prevents data changes from occurring on a companion node by ensuring that nonprivileged
users cannot log into the companion node.
Only users with the allow hadr login privilege can log in to the companion node. Users without this privilege who attempt to log in to the companion node are rejected or redirected to the primary node, depending on the redirection property on the connection. If the redirection property is set, the connection is redirected; if it is not set, the connection is rejected. Users with the sso_role and sa_role are granted the allow hadr login privilege by default, so administrators can log in to the companion node to perform administrative tasks.
In addition, the following role is also granted allow hadr login by default:
● js_admin_role
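The admission logic for companion-node logins can be sketched as follows. This is an illustrative Python sketch of the rule described above, not the server's implementation; the function and parameter names are hypothetical.

```python
def companion_login(has_hadr_login_privilege: bool,
                    redirection_set: bool) -> str:
    """Outcome of a login attempt on the companion node.

    Privileged users (allow hadr login, e.g. sa_role, sso_role,
    js_admin_role holders) are admitted; everyone else is either
    redirected to the primary or rejected outright.
    """
    if has_hadr_login_privilege:
        return "accept"
    # Unprivileged users never get a session on the companion:
    return "redirect" if redirection_set else "reject"
```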
The companion node can be used for read-only access or reporting. The following example describes how to
set up read-only access. Generally, you should have a separate read-only user who is allowed to log into the
companion node but does not have permission to modify data.
Note
The read-only user should not own any objects on either the primary or the standby server because object
owners implicitly have all permissions on their objects, so you cannot restrict them to read-only
permissions.
In this example, objects in the pubs2 database are owned by DBO. User pubs2user is created with privileges
to modify pubs2, while user pubs2rouser is a read-only user.
use master
go
create login pubs2user with password "Sybase123"
go
create login pubs2rouser with password "Sybase123"
go
2. Create the roles and grant privileges (if you created a default profile, see Manage Login Profiles in the SAP
ASE Cockpit help for additional steps):
use pubs2
go
sp_adduser pubs2user
go
New user added.
(return status = 0)
sp_adduser pubs2rouser
go
New user added.
(return status = 0)
When logging in to the server, pubs2user can view and alter data.
However, when logging in to the database, pubs2rouser can view, but not alter, the data:
Procedure
Execute this from the RMA to add 1 GB of additional storage to replication on the primary site:
Results
The sap_add_device command issues an add partition command to the underlying SAP Replication
Server defined for the requested logical host.
Note
When you enable stream replication, Replication Server automatically creates a simple persistent queue.
By default, this queue consists of two 1000 MB files, but can extend to a maximum of one hundred files
(100 GB of disk space). Use the sap_set simple_persistent_queue_max_size command to restrict
and adjust the maximum amount of disk space allocated for the simple persistent queue.
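The note's sizing arithmetic (two 1000 MB files by default, extensible to one hundred files) can be checked with a short sketch (illustrative Python; the constants restate the figures from the note):

```python
FILE_SIZE_MB = 1000   # default size of each SPQ file
DEFAULT_FILES = 2     # files created when the queue is set up
MAX_FILES = 100       # upper bound on the number of files

default_mb = DEFAULT_FILES * FILE_SIZE_MB    # 2000 MB initially
max_gb = MAX_FILES * FILE_SIZE_MB / 1000     # 100 GB ceiling
print(default_mb, max_gb)
```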
Replication Servers are configured with file system space that buffers and forwards replication data between the primary and standby sites. In cases of high volume or system outages, you may need to increase the space used by replication.
● If the replication throughput cannot meet the current primary system demand, the SAP ASE transaction
log may become full while waiting for replication. Adding buffering space to the Replication Servers allows
the SAP ASE logs to truncate more frequently, pushing the data out of the SAP ASE transaction log and
into replication queue storage to remove the risk of a full log in the primary SAP ASE server.
● Primary SAP ASE transaction log space may fill if the standby site is unavailable (due to either planned or
unplanned downtime). The SAP ASE transaction log may become full while waiting for the standby to
return to service. Adding buffering space to the Replication Servers can allow the SAP ASE logs to truncate
more frequently, pushing the data instead into Replication Server queues for storage until the standby
server returns to service and can accept the replication backlog.
Use the sap_tune_rs RMA command to tune and configure the HADR components (such as the primary and
standby SAP ASE, Replication Agent, and Replication Server).
Select the <memory> and <number of cpus> values based on the primary transaction log generation rate. For example, if the primary log generation rate is 3.5 GB per hour, you can tune Replication Server to have 4 GB of memory and 2 CPUs so that latency can be less than 5 seconds. If the primary log generation rate is higher (greater than 5 GB but less than 12 GB per hour), then setting memory to 8 GB and CPUs to 4 should keep the latency to less than 5 seconds.
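The sizing guidance above can be expressed as a small helper. This illustrative Python sketch encodes only the two data points given in the text; rates between or beyond them are assumptions that should be validated by testing.

```python
def rs_tuning(log_rate_gb_per_hour: float) -> tuple:
    """Suggest (memory_gb, cpus) for Replication Server per the
    guidance in this section, targeting latency under 5 seconds."""
    if log_rate_gb_per_hour <= 3.5:
        return (4, 2)    # documented: 3.5 GB/hour -> 4 GB, 2 CPUs
    if log_rate_gb_per_hour < 12:
        return (8, 4)    # documented: 5-12 GB/hour -> 8 GB, 4 CPUs
    # Beyond 12 GB/hour the text gives no figure; size empirically.
    raise ValueError("no documented guidance above 12 GB/hour")
```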
There are several ways to tune Replication Agent and SAP Replication Server to improve their overall
performance.
In the Replication Agent, the default value for peak transaction threshold is 5, and the default for peak transaction timer is 300 (in seconds). This means that Replication Agent switches to "async" mode if commits take longer than 10 seconds (the default value for max_commit_wait) a total of five times over a period of 300 seconds.
Such mode switches occur if the disks on which the SPQ is stored have slower I/O write performance than other disks in the system. In this case, set the value of peak transaction threshold to 50, and peak transaction timer to 120. These changes should prevent the mode from switching.
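The switching rule can be sketched as a count over a sliding window. This is an illustrative Python sketch of the condition described above; the Replication Agent's actual bookkeeping is internal to the server.

```python
def should_switch_to_async(slow_commit_times, now,
                           threshold=5, timer=300):
    """True if at least `threshold` slow commits (commits exceeding
    max_commit_wait) occurred within the last `timer` seconds.

    `slow_commit_times` holds the timestamps (seconds) of commits
    that already exceeded the max_commit_wait limit.
    """
    recent = [t for t in slow_commit_times if now - t <= timer]
    return len(recent) >= threshold
```

Raising threshold to 50 and lowering timer to 120, as recommended above, makes the condition much harder to satisfy, which is why those settings suppress the mode switches.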
Use the alter connection command to enable the early dispatch mechanism to handle large transactions in SAP Replication Server. Adjust the values of the following parameters in the command to achieve this:
● Set the value of the parallel_dist parameter to on to enable early dispatch in the SAP Replication Server. This configuration change automatically suspends and resumes the distributor. However, the distributor component waits until the SQT cache is flushed of any transactions. The following example enables early dispatch for the pubs2 database on the SFSAP1 server:
The Replication Server error log includes messages similar to this when the distributor thread starts
incorrectly:
● Set the value of the dsi_num_large_xact_threads parameter to 2 (to handle two large transactions
received in parallel).
For example:
○ The value for dsi_num_large_xact_threads should be equal to the number of expected large transactions that occur in parallel.
○ The value for dsi_num_threads is equal to the value of dsi_num_large_xact_threads plus 3. That is:
dsi_num_threads = dsi_num_large_xact_threads + 3
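The relationship between the two parameters can be checked with a one-line helper (illustrative Python restating the formula above):

```python
def dsi_num_threads(num_large_xact_threads: int) -> int:
    """dsi_num_threads = dsi_num_large_xact_threads + 3."""
    return num_large_xact_threads + 3

# Two parallel large-transaction threads require 5 DSI threads total:
print(dsi_num_threads(2))   # → 5
```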
If you upgrade the primary SAP ASE to version 16.0 SP03 PL11 or later on non-Windows platforms, you should
increase the replication agent memory size by 32 MB per primary database. Use the SAP ASE stored
procedure sp_configure to change the value, for example:
See the Tuning Memory Allocation chapter in SAP Replication Server Administration Guide Volume 2 for more
information.
Although the sap_status path command provides the latency information for each active replication path,
you can also use the rs_ticket command to view detailed information regarding latency among the
replication server internal threads.
1. Log into the primary SAP ASE server with isql as the DR_admin login.
2. Switch to the database for which you need to investigate the replication latency.
3. Issue:
4. Log into the standby SAP ASE server with isql as the DR_admin login.
5. Switch to the database for which you are investigating the replication latency.
6. Issue:
Aug 20 2015 11:05PM  Aug 20 2015 11:05PM  Aug 20 2015 11:05PM  Aug 20 2015 11:05PM  Aug 20 2015 11:05PM
(1 row affected)
● Latency from the primary ASE server to the standby ASE server:
○ Calculate by rdb_t – pdb_t
● Latency from the primary ASE server to Replication Server internal threads:
○ Latency to EXEC thread: exec_t – pdb_t
○ Latency to DIST thread: dist_t – pdb_t
○ Latency to DSI thread: dsi_t – pdb_t
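Given the five timestamps returned by rs_ticket (pdb_t, exec_t, dist_t, dsi_t, rdb_t), the latencies above are simple differences. A sketch of the arithmetic (illustrative Python; the timestamp values are made up for the example):

```python
from datetime import datetime

def latencies(pdb_t, exec_t, dist_t, dsi_t, rdb_t):
    """Latency of each replication stage relative to the primary commit."""
    return {
        "primary_to_standby": (rdb_t - pdb_t).total_seconds(),
        "to_exec_thread":     (exec_t - pdb_t).total_seconds(),
        "to_dist_thread":     (dist_t - pdb_t).total_seconds(),
        "to_dsi_thread":      (dsi_t - pdb_t).total_seconds(),
    }

# Hypothetical sample: ticket committed at 23:05:00, arrived at 23:05:02.
t0 = datetime(2015, 8, 20, 23, 5, 0)
t1 = datetime(2015, 8, 20, 23, 5, 2)
print(latencies(t0, t0, t1, t1, t1))
```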
Note
The rs_ticket command can also be used to check the health of the Replication Server; this mechanism is referred to as a "heartbeat." Write a loop in which rs_ticket is sent from the primary server at a specific interval (for example, every 10 minutes), and then check the rs_ticket_history table on the target (standby) server to verify that the ticket was received. A successfully received ticket indicates that the Replication Server is functional.
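The heartbeat loop the note describes can be sketched as follows. This is illustrative Python; send_ticket and ticket_received are hypothetical callables standing in for issuing rs_ticket on the primary and querying rs_ticket_history on the standby.

```python
import time

def heartbeat(send_ticket, ticket_received, interval=600, max_beats=None):
    """Repeatedly send an rs_ticket and confirm it arrives on the standby.

    Yields True for each beat where the ticket arrived (Replication
    Server functional), False when a ticket fails to arrive.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        ticket_id = send_ticket()          # rs_ticket on the primary
        time.sleep(interval)               # allow time for delivery
        yield ticket_received(ticket_id)   # rs_ticket_history on standby
        beats += 1
```

A monitoring script would alert on the first False result, since a missing ticket indicates the replication path is not delivering data.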
Both the HADR pair and DR node support up to five customized directories for database, transfer log, log,
configuration, and backup files.
Note
You can configure the following parameters in the response file to customize the file directories.
The following tables list SAP Replication Server and Replication Management Agent instance files that are
saved in their corresponding customized directories.
$SYBASE/<SID>/log/<SID>_REP_PR/<SID>_REP_PR.log
    The log file for SAP Replication Server <SID>_REP_PR (the log file directory).
$SYBASE/<SID>/log/<SID>_REP_PR/<SID>_REP_PR.stderr
    Standard error output for SAP Replication Server <SID>_REP_PR (the log file directory).
$SYBASE/<SID>/log/<SID>_REP_PR/<SID>_REP_PR.stdout
    Standard output for SAP Replication Server <SID>_REP_PR (the log file directory).
$SYBASE/<SID>/log/<SID>_REP_PR/<SID>_REP_PR_RSSD_ra.out
    The output file for the dbltm process in the ERSSD (the log file directory).
$SYBASE/<SID>/cfg/<SID>_REP_PR/<SID>_REP_PR.cfg
    The configuration file for SAP Replication Server <SID>_REP_PR, created by rs_init (the configuration file directory).
$SYBASE/<SID>/cfg/<SID>_REP_PR/RUN_<SID>_REP_PR.sh
    The run file for SAP Replication Server <SID>_REP_PR (the configuration file directory).
$SYBASE/<SID>/cfg/<SID>_REP_PR/<SID>_REP_PR_RSSD_ra.cfg
    The configuration file for the dbltm process in the ERSSD (the configuration file directory).
$SYBASE/<SID>/database/<SID>_REP_PR_RSSD/db/<SID>_REP_PR_RSSD.db
    The database file for the dbsrv17 process in the ERSSD (the database file directory).
$SYBASE/<SID>/log/<SID>_REP_PR_RSSD/<SID>_REP_PR_RSSD.out
    The output file for the dbsrv17 process in the ERSSD (the log file directory).
$SYBASE/<SID>/log/<SID>_REP_PR_RSSD/backup.syb
    The log file for the backup operation (the log file directory).
$SYBASE/<SID>/translog/<SID>_REP_PR_RSSD/translog/<SID>_REP_PR_RSSD.log
    The log file for the dbsrv17 process in the ERSSD (the translog file directory).
$SYBASE/<SID>/backup/<SID>_REP_PR_RSSD/backup/<SID>_REP_PR_RSSD.db
    The backup database file for the dbsrv17 process in the ERSSD (the backup file directory for the database).
$SYBASE/<SID>/backup/<SID>_REP_PR_RSSD/backup/<SID>_REP_PR_RSSD.log
    The backup log file for the dbsrv17 process in the ERSSD (the backup file directory for the database).
$SYBASE/<SID>/backup/<SID>_REP_PR_RSSD/backup/<SID>_REP_PR_RSSD.mlg
    The backup mirror file for the dbsrv17 process in the ERSSD (the backup file directory for the database).
$SYBASE/<SID>/log/AgentContainer/logs/RMA_<YYYYMMDD>.log
    Log files for the RMA instance (the log file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/Logger.xml
    The configuration file for the RMA logger (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RAO.properties
    The file that defines the Replication Agent for Oracle configuration properties (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RAS.properties
    The file that defines the Replication Agent for SQLServer configuration properties (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RAU.properties
    The file that defines the Replication Agent for UDB configuration properties (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RA_DB.properties
    The file that defines RepAgent properties, used only for the <SID> database, when you set up replication (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RA_DB_master.properties
    The file that defines RepAgent properties, used only for the master database, when you set up replication (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RS.properties
    The file that defines replication server properties when replication is set up (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RS_DB.properties
    The file that defines SAP Replication Server database connection properties when replication is set up (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/RS_DB_master.properties
    The file that defines SAP Replication Server database connection properties, used only for the master database, when you set up replication (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/bootstrap.prop
    The configuration file for starting an RMA instance (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/security/csi.xml
    The configuration file for the CSI (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/security/roles.cfg
    The configuration file for roles (the configuration file directory).
$SYBASE/<SID>/cfg/AgentContainer/config/security/sa.pwd
    The encrypted password of user 'sa' (the configuration file directory).
$SYBASE/<SID>/database/AgentContainer/configdb/rsgerepo/seg0/*.dat
    Database files for the derby database (the database file directory).
$SYBASE/<SID>/database/AgentContainer/configdb/rsgerepo/service.properties
    A text file with internal configuration information for derby (the database file directory).
$SYBASE/<SID>/log/AgentContainer/configdb/derby.log
    The log file for derby (the log file directory).
$SYBASE/<SID>/backup/AgentContainer/backups/RepoBackupXXXXXXXXXXX.full
    A backup file (the backup file directory for the database).
$SYBASE/<SID>/backup/AgentContainer/backups/RepoBackupXXXXXXXXXXX.diff
    A backup file (the backup file directory for the database).
$SYBASE/<SID>/backup/AgentContainer/backups/repository.catalog
    The repository catalog (the backup file directory for the database).
SQL statement replication replicates batched SQL statements in stored procedures, which complements log-
based replication and addresses performance degradation caused by batch jobs.
In SQL statement replication, SAP Replication Server receives the SQL statement that modified the primary
data, rather than the individual row changes from the transaction log.
See SQL Statement Replication in SAP Replication Server Administration Guide Volume 2 for more details. In
the HADR environment, SAP Replication Server and RepAgent commands are wrapped within RMA commands
to provide a more straightforward way to manage SQL statement replication.
The SQL statement replication tasks described here all use the sap_sql_replication RMA command.
Related Information
Use the on parameter of the sap_sql_replication RMA command to enable SQL statement replication at either the database or the table level.
Procedure
<option> ::= { U | D | I | S }
○ <database> | All
To enable SQL statement replication for a specific database, specify the <database> parameter. Use All to enable it for the whole HADR environment.
○ <option>[<option>][…]
The DML operations you want to enable in SQL statement replication:
○ U – update
○ D – delete
○ I – insert select
○ S – select into
○ <table>[,<table>][,…]
Enable SQL statement replication for the tables you specify.
Note
See sap_sql_replication [page 512] for all the configuration requirements when specifying a table in
this command.
This example replicates update, delete, and insert select statements as SQL statements for the specified tables:
TASKNAME         TYPE               VALUE
---------------  -----------------  -------------------------------------
SQL Replication  Start Time         Thu Sep 13 02:28:33 UTC 2018
SQL Replication  Elapsed Time       00:00:00
SQLReplication   Task Name          SQL Replication
SQLReplication   Task State         Completed
SQLReplication   Short Description  Toggle SQL Replication in the system
Related Information
Use the threshold parameter of the sap_sql_replication RMA command to define when SQL statement
replication triggers.
Context
The threshold value is the minimum number of rows a SQL statement must affect before SQL statement replication is triggered. By default, SQL statement replication is triggered when a SQL statement affects more than 50 rows. You can adjust the threshold value according to your needs; different threshold values can be set only at the database level.
Procedure
Set the threshold by specifying the threshold parameter in the sap_sql_replication command:
○ <database> | All
To set the threshold for a specific database, specify the <database> parameter. Use All to set the
threshold for the whole HADR environment.
○ <value>
The <value> parameter defines the minimum number of rows a SQL statement must affect before SQL statement replication is triggered.
Related Information
Use the off parameter of the sap_sql_replication RMA command to disable SQL statement replication at either the database or the table level.
Procedure
<option> ::= { U | D | I | S }
○ <database> | All
To disable SQL statement replication for a specific database, specify the <database> parameter. Use All to disable it for the whole HADR environment.
○ <option>[<option>][…]
The DML operations you want to disable in SQL statement replication:
○ U – update
○ D – delete
Note
See sap_sql_replication [page 512] for all the configuration requirements when specifying a table in
this command.
This example disables the replication of update and delete statements as SQL statements for the ERP
database:
Related Information
Use the display parameter of the sap_sql_replication command to display SQL statement settings,
such as the value of threshold and the tables that are enabled or disabled with SQL statement replication.
Procedure
You can view the value of threshold and tables enabled or disabled with SQL statement replication for
corresponding DML operations:
○ All – SQL statement replication is enabled for all tables for the corresponding DML operation.
○ None – SQL statement replication is disabled for all tables for the corresponding DML operation.
This example displays the SQL statement settings for the database ERP_1:
You can view the value of threshold and tables enabled or disabled with SQL statement replication for
corresponding DML operations:
○ In-List – SQL statement replication for the corresponding DML operation is enabled for the tables listed in the TABLE_LIST column.
This example displays the SQL statement settings for the database ERP:
You can view the value of threshold and tables enabled or disabled with SQL statement replication for
corresponding DML operations:
○ Out-List – SQL statement replication for the corresponding DML operation is disabled for the tables listed in the TABLE_LIST column. Tables in the database that are not listed in the TABLE_LIST column still use SQL statement replication.
Related Information
Manage RMA configuration files at the global and the instance levels.
RMA has two levels of configuration files: one at the global level, under the RMA-16_0/config directory, and one at the instance level, under the RMA-16_0/instance/AgentContainer/config directory. Changing the parameters in the global configuration file changes the configuration for all RMA instances under the same RMA installation. Changing the parameters in the instance configuration file changes the configuration for that specific RMA instance.
Note
In the HADR environment, only one RMA instance is created under an RMA installation.
You should configure parameters using the instance configuration file because:
● The configurations in the instance file take precedence over the configurations in the global file. When RMA
is executing, it checks the configurations in the instance file first and then the global file.
● In a rolling upgrade, the installer overwrites the configuration file under the global level, but not the
configuration file under the instance level.
If the customized directory is enabled, the customized configuration directory is used instead of the instance
configuration directory. RMA checks the configurations in the customized configuration file first and then in the
global configuration file.
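This precedence (instance file first, then global file) can be modeled with java.util.Properties defaults; a minimal sketch, in which the parameter names and values are invented for illustration and are not actual RMA configuration parameters:

```java
import java.util.Properties;

public class ConfigPrecedence {
    public static void main(String[] args) {
        // Global-level settings (RMA-16_0/config); "trace.level" and "port"
        // are made-up parameter names used only for illustration.
        Properties global = new Properties();
        global.setProperty("trace.level", "info");
        global.setProperty("port", "7000");

        // Instance-level settings (RMA-16_0/instance/AgentContainer/config)
        // backed by the global properties: instance values take precedence.
        Properties instance = new Properties(global);
        instance.setProperty("trace.level", "debug");

        // The instance setting overrides the global one...
        System.out.println(instance.getProperty("trace.level"));
        // ...while keys not set at the instance level fall through to global.
        System.out.println(instance.getProperty("port"));
    }
}
```

The `Properties(defaults)` constructor gives exactly the documented lookup order: the instance map is consulted first, and the global map only when the key is absent.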
XA transactions are distributed transactions that are coordinated by an external transaction manager, such as CICS, Encina, or TUXEDO, using the X/Open XA protocol. Replication of transactions that are coordinated by MSDTC is not supported in HADR.
You can enable XA transaction replication during the setup of an HADR system or by using the
sap_xa_replication command after an HADR system is created. When XA transaction replication is
enabled, the server level configuration parameter enable DTM will be set to 1 (enabled) at both the primary
and replicate SAP ASE servers automatically and the dtm_tm_role will be also granted to the DR_maint user.
During failover, transactions in the prepared state are drained from the inbound queue to the standby SAP ASE. These transactions are applied to the new primary database (the former standby) after replication is enabled, to make the transaction state replicable to the new standby (the former primary); all of this happens before the new primary database is activated to accept new client data. Therefore, a failover in an HADR system with XA transactions might take longer to complete, depending on the amount of data to be applied.
Restrictions
XA transaction replication requires the CI version of both RepAgent and SAP Replication Server to be at least
1.17. There are two ways to enable XA transaction replication.
Using setup_hadr.rs
# If XA replication is enabled
#
# Valid values: true, false
xa_replication=true
2. Grant the dtm_tm_role to the transaction manager user used to log in to SAP ASE.
Another way to enable XA transaction replication is to use the sap_xa_replication command in an existing
HADR system as follows:
1. Grant the dtm_tm_role to the transaction manager user used to log in to SAP ASE.
2. Connect to the primary RMA and run the following command:
sap_xa_replication on
Procedure
sap_xa_replication off
HADR supports the replication of administration commands. Currently, the commands that can be replicated are update statistics and delete statistics.
Enabling Replication
The replication of administration commands is disabled by default. You can enable the replication by enabling
the replicate admin commands and dsi_apply_admin_sqlddl parameters through RMA.
1. Log in to RMA.
2. Enable the two parameters as follows:
○ To enable the replicate admin commands RepAgent parameter for a specific database, run:
See sap_configure_rat [page 444] and sap_configure_rs [page 447] for more information about the
parameters and usage.
Disabling Replication
To disable the replication of administration commands in HADR, set replicate admin commands and
dsi_apply_admin_sqlddl to false and off respectively.
1. Log in to RMA.
2. Disable the two parameters as follows:
○ Run the following command to disable replicate admin commands:
You can develop client applications that support the HADR functionality using SDK 16.0 SP02.
SDK 16.0 SP02 supports SAP ASE high-availability disaster recovery (HADR) through OCS, SAP jConnect for
JDBC 16.0 SP02 (SAP jConnect), and SAP ASE ODBC Driver 16.0 SP02.
● Primary server: One server is the designated primary, and all transaction processing by user applications
takes place on the primary.
● Warm standby: The second server acts as a warm standby to the primary server.
If the state of the primary server changes to deactivated, the standby is activated and becomes the new
primary server. During the deactivation process, SAP ASE notifies client applications of its state changes. The
notifications allow clients to act on state changes. For example, a client can stop initiating new transactions
until it receives a message saying the new primary is activated.
To support HADR functionality, client applications can use the SAP jConnect and SAP ASE ODBC Driver
features described here.
SAP jConnect provides special connection properties and state change messages for SAP ASE high-availability
disaster recovery (HADR).
The HADR_MODE property lets you enable or disable HADR features. By default, HADR mode is disabled. Valid
settings for this property include:
Note
For HADR_MODE = RECONNECT, SAP jConnect internally makes REQUEST_HA_SESSION = true. In this
case, the client application must set the SECONDARY_SERVER_HOSTPORT connection property. The
SECONDARY_SERVER_HOSTPORT connection property value specifies the companion server address. When
the primary server is down, the client application connects to the companion server. If
SECONDARY_SERVER_HOSTPORT value is not provided, the following error message is displayed:
JZ0F1: SAP Adaptive Server Enterprise high-availability failover connection was
requested but the companion server address is missing.
If the client application uses HADR_MODE = RECONNECT and explicitly sets REQUEST_HA_SESSION = false, SAP jConnect internally overrides the client setting and sets REQUEST_HA_SESSION = true.
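A minimal sketch of preparing jConnect connection properties for RECONNECT mode; the credentials and host:port values are placeholders, and an actual connection additionally requires a running HADR server pair:

```java
import java.util.Properties;

public class HadrProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("USER", "sa");                // placeholder credentials
        props.setProperty("HADR_MODE", "RECONNECT");    // enable HADR reconnect mode
        // Required with RECONNECT: address of the companion (standby) server.
        // The host and port here are placeholders.
        props.setProperty("SECONDARY_SERVER_HOSTPORT", "standbyhost:5000");

        // A real application would now connect, for example:
        //   DriverManager.getConnection("jdbc:sybase:Tds:primaryhost:5000", props);
        System.out.println(props.getProperty("HADR_MODE"));
        System.out.println(props.getProperty("SECONDARY_SERVER_HOSTPORT"));
    }
}
```

Omitting SECONDARY_SERVER_HOSTPORT in this mode produces the JZ0F1 error shown above.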
The different states of the HADR server that the client application can receive are:
● NONE – indicates that the HADR server does not support the HADR feature or the client has set connection
property HADR_MODE=NONE/null.
● ACTIVE – indicates that the current connection is to the active primary server and the client can perform
any operation.
● DEACTIVATING – indicates that the server is undergoing deactivation and the client application must not perform any new operation using the current connection. If the connection property is HADR_MODE=RECONNECT/NOKILL/NOKILL_WITH_MAP, active transactions can be extended, but no new transaction can be started in the DEACTIVATING state. If the client tries to perform any new operation, a SQLException is thrown with error code 2377.
● DEACTIVATED – indicates that the server was successfully deactivated and no new operation can be performed using the current connection. If the connection property is HADR_MODE=RECONNECT/NOKILL/NOKILL_WITH_MAP, the connections remain intact but are not usable; if the client tries to execute any query, a SQLException is thrown with error code 2379. If the connection property is HADR_MODE=MAP/NONE and the client tries to perform operations in this state, the connection is terminated.
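A client can encode these rules in a simple guard; a sketch in which the helper method and its return convention are hypothetical, while the state names follow the list above:

```java
public class HadrStateGuard {
    // Returns true if the client may start a new operation in this state.
    // The state names follow the HADR states described above.
    static boolean canStartNewOperation(String state) {
        switch (state) {
            case "ACTIVE":
                return true;        // any operation is allowed
            case "DEACTIVATING":    // no new operations; active transactions
            case "DEACTIVATED":     // may be extended depending on HADR_MODE
                return false;
            case "NONE":
            default:
                return true;        // HADR not in effect; behave normally
        }
    }

    public static void main(String[] args) {
        System.out.println(canStartNewOperation("ACTIVE"));
        System.out.println(canStartNewOperation("DEACTIVATING"));
    }
}
```

An application would consult such a guard before issuing new transactions, and poll again while the guard returns false.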
You can retrieve the server state change messages in these two ways:
To retrieve the current HADR server state, client applications must pass the HADR_CURRENT_STATE string
parameter using the getClientInfo() API.
Example
In this example, the driver does not make a round trip to the server to retrieve the state change messages.
Instead, it reads the outbound messages sent by the server whenever the state change occurs:
To retrieve the server state change messages using the getClientInfo(), refer to the sample code from the
HADRApp.java file:
System.out.println("Sleeping...");
Thread.sleep(1000);
}
catch (SQLException sqlEx)
{
// Gets current HADR server state
String hadrState = getCurrentHADRState(conn);
To retrieve the server state change messages using the SybMessageHandler interface, the client application
implements the SybMessageHandler:
import com.sybase.jdbcx.SybMessageHandler;
public interface SybMessageHandler
{
    public SQLException messageHandler(SQLException sqe);
}
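A handler implementation might log state-change messages and suppress them; in this self-contained sketch an equivalent interface is declared locally so the code compiles without the jConnect jar, and the informational SQLException is constructed by hand rather than delivered by the driver:

```java
import java.sql.SQLException;

public class StateChangeHandler {
    // Local stand-in for com.sybase.jdbcx.SybMessageHandler so the sketch
    // compiles standalone; the method signature mirrors the interface above.
    interface SybMessageHandler {
        SQLException messageHandler(SQLException sqe);
    }

    public static void main(String[] args) {
        SybMessageHandler handler = sqe -> {
            // Log informational messages and swallow them (return null);
            // return sqe to let real errors propagate as exceptions.
            if (sqe.getErrorCode() == 0) {      // informational message
                System.out.println("state change: " + sqe.getMessage());
                return null;
            }
            return sqe;
        };

        // Simulated informational message; a real one is delivered by the driver.
        SQLException info = new SQLException("HADR state is DEACTIVATING", "010P4", 0);
        System.out.println(handler.messageHandler(info) == null);
    }
}
```

Returning null from the handler tells the driver the message is consumed; returning the exception lets it surface to the caller.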
Set HADR_MODE to MAP or NOKILL_WITH_MAP in SAP jConnect to extract HADR_LIST_MAP from these
properties.
When you set HADR_MODE to MAP or NOKILL_WITH_MAP, SAP jConnect receives the HADR_LIST_MAP during
login and whenever there is a topology change. To retrieve HADR_LIST_MAP, an SAP jConnect application calls
the SybConnection.getClientInfo() method. SybConnection.getClientInfo() returns the property
object.
The client application extracts HADR_LIST_MAP from these properties, which returns:
To retrieve the HADR_LIST_MAP components from this LinkedHashMap, retrieve these keys:
GroupName
GenerationNumber
Primary
Standby_1
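A sketch of reading these keys; here the LinkedHashMap is populated with invented placeholder values rather than retrieved from SybConnection.getClientInfo():

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HadrListMapDemo {
    public static void main(String[] args) {
        // Simulated HADR_LIST_MAP; in a real application this map comes from
        // the properties returned by SybConnection.getClientInfo().
        Map<String, Object> hadrListMap = new LinkedHashMap<>();
        hadrListMap.put("GroupName", "HADRGRP");
        hadrListMap.put("GenerationNumber", 3);
        hadrListMap.put("Primary", "site1host:5000");
        hadrListMap.put("Standby_1", "site2host:5000");

        // Retrieve the documented keys.
        System.out.println(hadrListMap.get("GroupName"));
        System.out.println(hadrListMap.get("Primary"));
    }
}
```

A LinkedHashMap preserves insertion order, so iterating the map yields the entries in the order the server sent them.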
The SAP ASE ODBC driver provides special connection properties, informational messages, and API support
for SAP ASE high-availability disaster recovery (HADR). Use these new features to write robust database
applications and take advantage of the HADR system to stay always-on.
Several properties in the SAP ASE ODBC driver support the HADR functionality of SAP ASE.
● DRNoKillDuringDeactivation – when this property is set to 1 (the default is 0), the SAP ASE ODBC driver requests that SAP ASE not terminate the connection when the primary server is in a deactivated state or is undergoing deactivation.
Applications that do not monitor the information messages are also notified of some of the events as errors
when executing statements.
The connection properties DRNoKillDuringDeactivation and HADRList control the nature of the messages:
9.2.2 Use the SAP ASE ODBC Driver to Get HADR State
Change Messages from SAP ASE
SAP ASE notifies ODBC applications via the SAP ASE ODBC driver when an SAP ASE server starts to
deactivate, cancels an ongoing deactivation process, transitions to a deactivated state, completes the failover,
or transitions from a deactivated state to an active state.
Applications can use the SQLGetConnectAttr() API to get the HADR state change messages (server state). By staying in sync with the server status, a database application can identify whether the server is undergoing planned events (upgrade or maintenance) or detect disaster events, and act accordingly, all without restarting the whole application. The application can remain connected and keep serving client requests according to server availability. The application is also notified about reconnection to the site that has taken over (referred to as failover) in case of disaster or site upgrades. After a failover, the application reestablishes the context and continues with the client requests that could not be processed because of the failover.
Using the following connection attributes, an application can monitor the current state of the server:
● SQL_ATTR_DR_INFO_MSG – retrieves the current state of the connection. An application can poll the
connection by calling the SQLGetConnectAttr function and passing in the SQL_ATTR_DR_INFO_MSG
connection attribute. The value is set to a SQLINTEGER value of the most recent informational message
received:
○ SQL_DR_ACTIVATE – indicates that the server is in an active state and can process client requests.
○ SQL_DR_DEACTIVATED – indicates that the server is deactivated (or inactive) and unable to serve client requests.
○ SQL_DR_DEACTIVATION_CANCELED – indicates that the deactivation was canceled and the server is back in the active state.
Example
The first call to the SQLGetConnectAttr () API returns the most recent state of the ASE in the <hadr_status>
variable. For subsequent calls <SQL_DR_REACTIVATED> and <SQL_DR_DEACTIVATION_CANCELED> are
reported as <SQL_DR_ACTIVE>. When the failover in the driver is complete, the <SQL_DR_FAILOVER> state is
reported as <SQL_DR_ACTIVE>. For more about application failover, see Application Failover [page 364].
SQLINTEGER hadr_status;
SQLGetConnectAttr(connection_handle, SQL_ATTR_DR_INFO_MSG, &hadr_status,
sizeof(SQLINTEGER), SQL_NULL_HANDLE);
● SQL_ATTR_DR_INFO_CALLBACK – applications that link directly to the SAP ASE ODBC driver can avoid
polling by registering a callback function using SQLSetConnectAttr and setting
SQL_ATTR_DR_INFO_CALLBACK to the address of the callback function. This function is called when an
HADR informational message is received. The callback function is not called on inactive connections
because the connection is not proactively monitored. Messages are received when the application
executes a statement or fetches rows from the result set. The syntax for the state events callback function
is:
Example
Connections that enable the HADRList property receive an HADR data source list from the server upon login
and any time the HADR data source list changes.
The data source list contains the current primary server (listed first), followed by all available standby servers.
Each data source enumerates a list of addresses (which refer to the same server) and a list of high-availability
companion data sources available for that data source. To retrieve these messages, SAP ASE ODBC driver
applications can poll the connection by calling the SQLGetConnectAttr function, which uses the following
connection properties:
Where:
○ <conn> – is the connection handle on which the message was received.
○ <generation_number> – is the generation number of the new list that determines whether the
application retrieves the new list or already has it from a different connection.
○ <size_needed> – is the amount of memory needed to hold the new list.
When the callback function is called, the application, if it decides to update its list, may call the
SQLGetConnectAttr function and retrieve the SQL_ATTR_HADR_LIST attribute to get the new list.
DataSourceList structure
struct SQLHADRDataSourceList
{
// The generation number of this list
SQLINTEGER generation;
// The number of addresses for this data source. Each address refers
// to the same data source (server)
SQLLEN number_of_addresses;
// An array of size number_of_addresses containing pointers to each
// address in the array. The addresses are in the same format as
// addresses in the interfaces file. Regardless of the setting of
// SQL_OUTPUT_NTS, the addresses are null terminated.
SQLWCHAR** address_list;
// An array of size number_of_addresses containing the byte length of
// each element in the address_list array.
SQLLEN* address_list_lengths;
// The number of HA companions available for this data source.
SQLLEN number_of_ha_companions;
// An array of size number_of_ha_companions containing pointers to
// the SQLHADRDataSource for each of the HA companion servers.
SQLHADRDataSource** ha_companion_list;
// This SQLINTEGER is treated as a set of flags for the data source.
// Currently, the only flag defined is SQL_DR_READONLY.
SQLINTEGER flags;
};
Application failover is the reconnection of user applications to a standby server that has been promoted to the
primary role upon a failure of the previous primary server, or its planned designation to a standby role for
maintenance purposes.
Application failover is not triggered on inactive connections because the connection is not proactively
monitored by the driver. Reconnection to a new primary does not happen unless the application executes a
statement (a SQL query). When the failover is complete, the driver fails the statement execution by sending an
error with native message number 30130:
The server is not available or has terminated your connection, you have been
successfully connected to the next available HA server. All active transactions
have been rolled back.
This message indicates that the query execution failed because a successful failover in an HADR system does not migrate the context; the application must explicitly reestablish it. If query execution fails with the HA failover success error, the application must reset the context. If failover is unsuccessful for any other reason, the application receives an error with native message number 30131:
Connection to the server has been lost, connection to the next available HA server
also failed. All active transactions have been rolled back.
Note
A successful failover in an HADR system does not migrate the context. The application has to reset the current database, any set options, client language, and character sets; all context information from the previous connection is lost.
To configure the ODBC driver to handle planned and unplanned failover, set HADRMode=1. When the HADRMode property is set to 1 (the default is 0), the SAP ASE ODBC driver enables the DRNoKillDuringDeactivation, HADRList, and HASession connection properties to handle planned and unplanned HADR failover events in the ODBC driver.
Use the HADR primary wait time configuration parameter to determine the amount of time, in seconds, that the standby server continues to send the redirect list to clients in the absence of a primary server before failing the connection. See the Reference Manual: Configuration Parameters.
Planned failovers in an HADR system allow the standby site to take over so that the primary site can be
released for maintenance purposes.
Context
Procedure
1. Deactivate the current primary site. When deactivation starts, the server state changes to deactivating.
When the deactivation is successful, the server state changes to the deactivated state. Fetching the server
state returns SQL_DR_DEACTIVATING or SQL_DR_DEACTIVATED, depending on the server state.
Applications cannot start new transactions when the server is in deactivating or deactivated states; doing
so results in an error. Applications have to wait and keep polling the server state until there is an active
primary server. To deactivate the current primary site, after a failed query execution:
a. The application checks to see whether the execution failed because of a planned or unplanned HADR
event.
b. While the server is in the deactivating or deactivated state, the application continues to fetch the
server state until the state changes from deactivating or deactivated, to some other state.
2. There are three ways in which the application may connect to the active primary server. The application
identifies one of the following scenarios and proceeds so that:
An unplanned failover occurs when there is a crash or a fault in the primary server and the secondary server
takes over the role of primary server, to allow normal use to continue.
If the primary server is down and the application executes a statement, the ODBC driver tries to find a server
that has been promoted to the active primary role. The new active primary server may be one of these:
If there is no primary server, the driver continues to search for a new active primary server until the time-out is
reached (default is 5 minutes). To change the default time-out value, use the server configuration option HADR
primary wait time.
Note
The timeout starts the moment the server goes down and not when the client application executes the
query.
If the primary server crashes while a planned failover is in progress, the ODBC driver reports the server state as
SQL_DR_CONNECTION_LOST. Upon receiving the state change message, the application executes a statement
so the driver connects to the new active primary server. If the new active primary server is unavailable, the
ODBC driver continues to search for the new primary server.
After the failover is complete, the ODBC driver fails the statement execution with an HA failover error. The
application resets the context when the driver throws an HA failover success error.
Note
If the primary server is down at the time of initial connection, the driver tries to connect to the secondary
server. In such cases the application must set the secondaryhost and secondaryport connection
properties. For an HADR system, the secondary server is the standby server.
1. Configure the HADR primary wait time option to the appropriate value.
2. Configure the application to set the secondaryhost and secondaryport connection properties.
Applications can fetch error messages to determine whether the query execution failed because of an HADR
event, such as deactivate or failover, and so on.
If applications do not want to perform a search for error codes, they can rely on the callback function, which
notifies applications about any changes in state.
Set the callback function for informational messages using the SQLSetConnectAttr (SQL_ATTR_DR_INFO_CALLBACK) API. Within this callback function, set a global Boolean variable to true to indicate that the server state has changed. If the statement execution fails and the global Boolean variable is set, the state has changed and the application needs to handle the new state. If the query execution fails and the state has not changed, then there is some other error that the application must handle.
Example
Procedure
1. Add a wrapper to the SQLPrepare() function and maintain a list of prepared statements. For example:
Sample code shows how planned and unplanned failovers are handled.
The application creates an unprivileged connection and sets the application context using the
SetAppContext() function. The application executes an update query for its entire lifecycle and handles the
HADR events:
while (!executeQuery)
{
sr = SQLExecute(stmt);
if (sr == SQL_SUCCESS || sr == SQL_SUCCESS_WITH_INFO)
{
return sr;
}
if (sr == SQL_ERROR)
{
if (server_state_changed)
{
server_state_changed = false;
failover_completed = false;
SQLGetConnectAttr(dbc, SQL_ATTR_DR_INFO_MSG, &connection_state,
sizeof(connection_state), 0);
while (connection_state == SQL_DR_DEACTIVATED || connection_state ==
SQL_DR_DEACTIVATING)
{
cout << "wait server is deactivated" << endl;
Wait(2);
//user could wait for more time if they want to
SQLGetConnectAttr(dbc, SQL_ATTR_DR_INFO_MSG, &connection_state,
sizeof(connection_state), 0);
For the complete code, refer to the hadrapp sample in the SDK.
SAP CTLIB provides special connection properties to support SAP ASE high-availability disaster recovery (HADR).
The following context- and connection-level properties in SAP CTLIB support the HADR functionality of SAP ASE:
● CS_PROP_REDIRECT – This property is enabled by default. When enabled, it allows the standby server to
redirect the connection to an alternate server (cluster) or to the primary server in HADR topology.
● CS_HAFAILOVER – This property is disabled by default. When enabled, an HA aware client can failover to
an alternate server in case a planned or unplanned failover event takes place in the HADR system.
Using the context/connection level properties that support the HADR functionality, you can control the
behavior of the server with respect to the client.
● The CS_PROP_REDIRECT property is enabled and set to CS_TRUE by default. In this case, when a client
attempts to log onto a standby server, it is redirected to the primary server in the HADR system and a
connection is established with the active primary server.
To disable login redirection, first disable the CS_HAFAILOVER property and then set the
CS_PROP_REDIRECT property to CS_FALSE.
● The CS_HAFAILOVER property is set to CS_FALSE by default. To enable the CS_HAFAILOVER property, set
it to CS_TRUE. When enabled, an HA aware client can failover to an alternate server in a planned or
unplanned failover. In a failover event, if the CS_HAFAILOVER property is disabled, the client does not
failover to the standby server and the connection is terminated.
Note
When you enable the CS_HAFAILOVER property, the CS_PROP_REDIRECT property is also enabled by
default.
● The CS_PROP_EXTENDEDFAILOVER property is set to CS_TRUE by default, but it is used only when the CS_HAFAILOVER property is set to CS_TRUE. When enabled, the client receives a list of network addresses from the server that the client must use for failover, instead of relying on information initially retrieved from the directory service layer.
You can set these properties at both the connection and context levels. To set a property at the connection level, use the ct_con_props() function; a property set at the connection level applies only to that connection. Similarly, to set a property at the context level, use the ct_config() function; a property set at the context level applies to every connection created under that context.
In the following example, the ct_con_props() function enables the CS_PROP_REDIRECT property for a single connection and the ct_config() function disables the CS_HAFAILOVER property at the context level:
Failover is the reconnection of applications to a standby server, which has been promoted to the primary role
upon a failure of the previous primary server, or its planned designation to a standby role for maintenance
purposes.
Failover can be planned or unplanned. In a planned failover, the primary server is set as standby for
maintenance purposes and the standby server is promoted to the primary role. Unplanned failovers usually
occur when there is a crash or a fault in the primary server and the secondary server takes over the role of
primary server to allow normal use to continue.
Failover events are not proactively monitored in CTLIB, and a failover on the server side does not immediately
result in a failover on the client side. Client-side failover is triggered only when the client application attempts
network interaction with the server. In this case, the client application receives an HA failover error message,
if a client error message callback handler is installed.
In the case of a successful failover event, the CS_RET_HAFAILOVER return value is returned by the attempted
API operation, such as ct_results(), ct_send(), ct_fetch(), or any routine that performs network
interaction.
The CS_RET_HAFAILOVER return value is returned from the API call during a synchronous connection. In an
asynchronous connection, the APIs return the CS_PENDING value to the caller and the operation is performed
asynchronously. Use the ct_poll() function to obtain the status of the last asynchronous operation. In the
event of a failover, the ct_poll() function returns CS_HAFAILOVER. Depending on the return code, perform
the required processing, such as sending the next command to be executed.
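The return-code handling described above can be sketched as follows. This is an illustrative Python model of the control flow only (the real API is the C CTLIB library); the function and class names are hypothetical stand-ins for ct_send() and the connection object:

```python
# Illustrative model of CTLIB HA-failover handling. The names are
# hypothetical stand-ins for the C CTLIB API (ct_send() and friends).
# On CS_RET_HAFAILOVER, the connection has already been migrated to
# the new primary, so the client simply re-issues the failed command.

CS_SUCCEED = "CS_SUCCEED"
CS_RET_HAFAILOVER = "CS_RET_HAFAILOVER"

def send_with_failover_retry(send_fn, command, max_retries=1):
    """Call send_fn(command); on an HA failover indication, retry the
    command against the new primary up to max_retries times."""
    retries = 0
    while True:
        status = send_fn(command)
        if status == CS_RET_HAFAILOVER and retries < max_retries:
            retries += 1
            continue  # re-send the command that was in flight
        return status

class FakeLink:
    """A fake connection that fails over once, then succeeds."""
    def __init__(self):
        self.failed_over = False
    def send(self, command):
        if not self.failed_over:
            self.failed_over = True
            return CS_RET_HAFAILOVER
        return CS_SUCCEED
```

In a real CTLIB application, the same loop would wrap the ct_send()/ct_results() calls and check for CS_RET_HAFAILOVER.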
SAP jConnect and the SAP ASE ODBC driver provide support for high-availability applications to run with SAP
ASE servers participating in an HADR system.
SAP jConnect
SAP jConnect supports high-availability (HA) applications running with an HADR system. Existing HA client
applications can use an HADR system without modifications. For more information about HA, see the SAP
jConnect for JDBC Programmers Reference > Programming Information > Database Issues > Failover Support.
The legacy HADR application behaves in the following way with the HADR server:
● Server in standby inactive state (no active primary) – the client application cannot connect to a server that
has no active primary server in the topology. The client application gets a login failure exception, with the
following error message:
JZ00L: Login failed.
9668: 01ZZZ Login failed. Adaptive Server is running in 'Standby' mode. The
user login does not have 'allow hadr login' privilege and login redirection
cannot occur since there is no Active Primary.
010HA: The server denied your request to use the high-availability feature.
Please reconfigure your database, or do not request a high-availability session.
● Server in primary inactive state:
○ The client application cannot connect to a server in the primary inactive state until the server is made
the active primary.
○ After connecting to the server, the client application can execute any queries.
○ When the server transitions to the primary inactive state, the client application becomes unresponsive
and then throws a SQLException.
● Server in primary active state – when the server is in the primary active state, the client application can
successfully connect to the server.
● Primary active server undergoes deactivation – if the active primary server is deactivated and the
client application tries to execute a query, the client application becomes unresponsive until the primary
server is reactivated. If the standby server is promoted to the active primary role, the client
application gets a failover SQLException with error code JZF02. In this case, the client application must
re-create all context objects (such as statements, prepared statements, and callable statements) and
re-execute the failed transaction or query.
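The re-create-and-re-execute handling in the last bullet can be sketched as follows. This is an illustrative Python model only (the real driver is Java jConnect); the classes are hypothetical stand-ins for JDBC connection and statement objects:

```python
# Illustrative model of jConnect failover handling (error code JZF02).
# After a failover, statement objects are invalid: the application
# re-creates them from the connection and re-executes the failed work.
# These classes are hypothetical stand-ins for JDBC objects.

class FailoverException(Exception):
    """Stand-in for the jConnect failover SQLException (JZF02)."""
    sql_state = "JZF02"

class Statement:
    def __init__(self, conn):
        self._conn = conn
    def execute(self, sql):
        if not self._conn.failed_over:
            self._conn.failed_over = True  # simulate a failover event
            raise FailoverException()
        return "ok"

class Connection:
    """Fake connection whose first execution hits a failover."""
    def __init__(self):
        self.failed_over = False
    def create_statement(self):
        return Statement(self)

def run_with_failover_retry(conn, sql):
    stmt = conn.create_statement()
    try:
        return stmt.execute(sql)
    except FailoverException:
        stmt = conn.create_statement()  # old statements are invalid
        return stmt.execute(sql)        # re-execute the failed query
```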
An HA application is one that sets the HASession connection property to 1 and handles the failover error
returned after an HA failover has successfully completed. For more information about HA and HASession, see the
Adaptive Server Enterprise ODBC Driver by Sybase Users Guide for Microsoft Windows and UNIX > Failover in High
Availability Systems in the Software Developer's Kit documentation set.
DSN=HADRPrimaryServer;UID=UnPrivilegedUser;PWD=HADRPWD123;SecondaryServer=localhost;SecondaryPort=1600;HASession=1;
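The attributes in the connection string above can be read with a small parser, shown here as an illustrative Python sketch (the real parsing happens inside the SAP ASE ODBC driver; parse_connection_string is not part of any driver API):

```python
# Parse an ODBC-style connection string into a dict of attributes.
# Illustrative only: the real parsing is done inside the SAP ASE ODBC
# driver, and parse_connection_string is not a driver API.

def parse_connection_string(conn_str):
    attrs = {}
    for part in conn_str.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            attrs[key.strip()] = value.strip()
    return attrs

conn = parse_connection_string(
    "DSN=HADRPrimaryServer;UID=UnPrivilegedUser;PWD=HADRPWD123;"
    "SecondaryServer=localhost;SecondaryPort=1600;HASession=1;"
)
assert conn["HASession"] == "1"          # HA failover handling enabled
assert conn["SecondaryServer"] == "localhost"
```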
When a query is executed on the deactivated primary server, the server blocks the query until there is an
active primary server. Query execution proceeds normally when the same server is reactivated. If there is a
failover, the server fails the command with error codes 2379 and 2376. As part of processing the response to the
query, the ODBC driver fails over to the new primary server. After the successful failover, the ODBC driver
generates the HAFailover error with the native message number 30130, and the ODBC application must reset
the application context and re-execute the failed query or transaction. For the ODBC driver to fail over to the
new primary server, set the HASession connection property to 1.
Note
The default value of the commandtimeout connection property is 30 seconds. The ODBC driver cancels
blocked commands after it reaches this time-out value. To delay the cancellation of blocked commands, set
the commandtimeout connection property to a higher value.
SAP CTLIB
SAP CTLIB provides compatibility support for existing HA and cluster applications. To use the high availability
features in your applications, enable the CS_HAFAILOVER property. These applications require minimal or no
modifications to run against the HADR servers.
The SAP jConnect and the SAP ASE ODBC driver provide support for Cluster Edition applications to run with
SAP ASE servers participating in an HADR system.
For details on the SAP ASE Cluster Edition, see the Cluster Edition Cluster Users Guide.
SAP jConnect supports the SAP Adaptive Server Enterprise Cluster Edition running with an HADR system, where
multiple SAP ASE servers connect to a shared set of disks and a high-speed private interconnection. This
allows the SAP ASE server to scale using multiple physical and logical hosts.
For more information about HA, see SAP jConnect for JDBC 16.0 Programmers Reference > Programming
Information > Advanced Features.
Use the connection string to enable connection failover by setting REQUEST_HA_SESSION to true, where
server1:port1, server2:port2, ... , serverN:portN is the ordered failover list:
Example
URL="jdbc:sybase:Tds:server1:port1,server2:port2,...,serverN:portN/mydb?REQUEST_HA_SESSION=true"
SAP jConnect tries to connect to the first host and port specified in the failover list. If unsuccessful, SAP
jConnect goes through the list until a connection is established or until it reaches the end of the list.
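This ordered-list behavior can be sketched as follows, as an illustrative Python model of walking a failover list (the real logic lives inside jConnect, and connect_fn is a hypothetical stand-in for the network connect):

```python
# Illustrative model of walking an ordered failover list, as jConnect
# does with REQUEST_HA_SESSION=true: try each host:port in order until
# a connection succeeds. connect_fn is a hypothetical stand-in for the
# real network connect.

def connect_with_failover_list(failover_list, connect_fn):
    """failover_list has the form "server1:port1,server2:port2,...".
    Returns the result of the first successful connect_fn call."""
    last_error = None
    for hostport in failover_list.split(","):
        host, _, port = hostport.strip().partition(":")
        try:
            return connect_fn(host, int(port))
        except OSError as err:
            last_error = err  # try the next server in the list
    raise OSError("no server in the failover list accepted the "
                  "connection") from last_error

def fake_connect(host, port):
    """Fake connect: the first server is down, the second accepts."""
    if host == "server1":
        raise OSError("connection refused")
    return (host, port)

assert connect_with_failover_list("server1:5000,server2:5001",
                                  fake_connect) == ("server2", 5001)
```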
DSN=HADRPrimaryServer;UID=UnPrivilegedUser;PWD=HADRPWD123;AlternateServers=localhost:1600;HASession=1;
When a query is executed on the deactivated primary server, the server blocks the query until there is an
active primary server. Query execution proceeds normally when the same server is reactivated. If there is a
failover, the server fails the command with error codes 2379 and 2376. As part of processing the response to
the query, the ODBC driver fails over to the new primary server.
After the successful failover, the ODBC driver generates the HAFailover error with the native message
number 30130, and the ODBC application must reset the application context and re-execute the failed query or
transaction. For the ODBC driver to fail over to the new primary server, set the HASession connection
property to 1.
Note
The default value of the commandtimeout connection property is 30 seconds. The ODBC driver cancels
blocked commands after it reaches this time-out value. To delay the cancellation of blocked commands, set
the commandtimeout connection property to a higher value.
SAP jConnect and the SAP ASE ODBC driver receive messages about HADR from SAP ASE.
Message 2379 – Primary has been deactivated.
● Available through:
○ ODBC – SQL_ATTR_DR_INFO_MSG or SQL_ATTR_DR_INFO_CALLBACK
○ jConnect – SybConnection.getClientInfo()
● Action: The client application should roll back any open transactions and avoid starting any new
transactions.
The SAP ASE HADR system is built on top of SAP Replication Server technology.
Replication Servers used in an HADR system are embedded in SAP ASE and use synchronous replication mode
when configured for high availability (HA). SAP ASE uses asynchronous replication mode when the cluster is
configured for disaster recovery (DR).
The SAP ASE topology includes one primary SAP ASE and one standby SAP ASE (called the "companion" in an HA
configuration). Applications access data on the primary SAP ASE. Administrator users may connect to either
SAP ASE server, but ordinary users are either rejected or redirected to the primary SAP ASE when they
attempt to connect to the standby SAP ASE.
SAP ASE HADR configuration and administration is performed using the sp_hadr_admin system procedure.
Most of its commands are designed for SAP ASE to use internally. Use only the documented sp_hadr_admin
parameters, and use them carefully.
Replication topology consists of one source and one target SAP ASE. The RMA module manages SAP ASE and
Replication Server state transitions using procedures that start with sap_ (see RMA Commands [page 440]).
Many of these procedures are designed for internal use; use only the documented sap_ interfaces for
administration and monitoring.
Use the SAP installer or the setuphadr utility to configure an HADR cluster. Do not use sap_ commands or
the sp_hadr_admin system procedure to perform this task.
SAP ASE Cockpit provides cluster-wide state monitoring and the ability to administer some HADR functionality.
The SAP Host Control module runs under sudo (root) privilege (on UNIX platforms) on SAP ASE hosts, and
provides services to the Fault Manager for running database and operating system commands.
When SAP ASE is in synchronous replication mode, committed transactions are sent to the Replication Server on
the remote host in addition to being written to the database log device. In synchronous mode, the replication
state may temporarily switch to asynchronous if the remote Replication Server does not respond within the
amount of time specified by the max commit wait Replication Agent configuration parameter; this prevents
application performance from degrading due to temporary glitches in network connectivity between the primary
SAP ASE and the remote Replication Server. However, automatic failover by the Fault Manager is disabled until
SAP ASE resumes synchronous replication after catch-up. Set max commit wait to a high value to
ensure zero data loss.
When the replication mode and state are synchronous, transactions are committed by SAP ASE after receiving
notification from the remote Replication Server that the data modifications have been written to a persistent
storage device that provides protection against data loss in the event of host or site failure.
10.1 Connections
● A local connection to the local Replication Server on the host on which it is running (connection
Site_A.db in the image below).
● A remote connection to the remote Replication Server on the opposite host (connection Site_A_R2.db in
the image below).
During certain failure conditions, local connections may be used with an external Replication Server when the
HADR cluster replicates to a third site (for example, a reporting database in SAP IQ).
Local multi-site availability replication definitions (repdefs) and subscriptions are similar. Both use:
● The same source names: <CID>_<site_name>_repdef. Neither name includes the _R1 or _R2 suffixes
because they are local.
● The route created between the primary and companion Replication Server for communication. The naming
convention is <CID>_REP_<site> (for example, HA1_REP_SITE01).
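The naming convention can be captured in a minimal helper (purely illustrative; routes are created by the HADR setup tools, not by user code):

```python
# Helper illustrating the route naming convention <CID>_REP_<site>.
# Purely illustrative; routes are created by the HADR setup tools,
# not by user code.

def route_name(cid, site):
    """Return the Replication Server route name for a cluster ID and
    site name, following the <CID>_REP_<site> convention."""
    return "{}_REP_{}".format(cid, site)

assert route_name("HA1", "SITE01") == "HA1_REP_SITE01"
```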
The SAP HADR system does not currently use the connection between the two Replication Servers
(HA1_REP_SITE01 in the image above).
Each SAP ASE server has a proxy connection with the other SAP ASE server in the HADR system. This
connection is used primarily by DR_admin, but it is also used when the local RMA and Host Agents need to verify
connections (for example, when the RMA sap_status procedure needs to access the remote node for status
information, or when the local Fault Manager heartbeat cannot see either the remote SAP ASE or the remote
Fault Manager and therefore assumes that its SAP ASE is isolated from the network and initiates a change to
standby mode). SAP ASE uses the proxy connections to fill out the information in these proxy tables in the
master database:
● hadrGetLog
● hadrGetTicketHistory
● hadrStatusActivePath
● hadrStatusResource
● hadrStatusRoute
● hadrStatusSynchronization
You can troubleshoot many Replication Server, SAP ASE Cockpit, and HADR system issues.
Troubleshooting the HADR system often includes rectifying permission and space issues.
● Check the dev_sybdbfm Fault Manager error log for errors – To get diagnostic information, set the trace
level to 3 by adding the line ha/syb/trace = 3 to SYBHA.PFL. Restart the Fault Manager for the change
to take effect.
● Increase the trace level of SAP Host Agent services – Add the line service/trace = 3 to /usr/sap/
hostctrl/exe/host_profile, then restart the SAP Host Agent by issuing: /usr/sap/hostctrl/exe/
saphostexec -restart.
These logs will subsequently display additional information:
/usr/sap/hostctrl/exe/dev_saphostctrl
/usr/sap/hostctrl/exe/dev_sapdbctrl
Common Issues
Permissions and space issues are common problems in the HADR system.
● Verify that all HADR directories have the appropriate permissions, specifically the SAP ASE installation
directory, the Fault Manager installation and execution directories, and /tmp. The Fault Manager creates
temporary directories under /tmp and adds temporary files there. If it is unable to do so, calls to the SAP
Host Agent fail, but the Fault Manager cannot know that the call failed because it could not add the
temporary files. For this reason, verify that the user executing the Fault Manager has permissions on all the
directories.
Use this command to check space usage in /tmp:
df -k /tmp
If this command shows 100 percent usage, you may have to make room in /tmp.
● Verify that the version for GLIBC is 2.7 or later by executing:
ldd --version
● Make sure you enter the correct passwords for sa, DR_admin, and sapadm. It can be very difficult to find
the root cause of the errors when password mismatches are the culprit. By default, sapadm may not have a
password when you create it with the SAP Host Agent, but requires one for the Fault Manager. Add or
update the sapadm password using the passwd command.
● Verify that the user limits value for open files is set to an adequate number (4096 or larger) before
configuring the HADR system for large databases. Use this command to view the value for open files:
○ On the C shell:
limit descriptors
○ On Bourne-compatible shells (sh, ksh, bash):
ulimit -a
● Verify that there is sufficient amount of memory and swap space before configuring the HADR system for
large databases. In particular, materialization requires an adequate amount of memory and swap space for
large databases.
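The space and limit checks above can be automated with a short script. This is an illustrative Python sketch, assuming a UNIX host; the 4096 descriptor threshold follows the guidance in this section:

```python
# Illustrative preflight checks for an HADR host (assumes UNIX):
# free space in /tmp and the per-process open-files limit. The 4096
# descriptor threshold follows the guidance in this section.

import resource
import shutil

def tmp_usage_percent(path="/tmp"):
    """Percentage of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def open_files_limit():
    """Soft limit on open file descriptors for this process."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft

if __name__ == "__main__":
    if tmp_usage_percent() >= 100.0:
        print("WARNING: /tmp is full; the Fault Manager cannot create "
              "its temporary files there")
    if open_files_limit() < 4096:
        print("WARNING: raise the open-files limit to at least 4096 "
              "before configuring HADR for large databases")
```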
The HADR system displays an error message when it cannot add an instance. For example:
This section of the error message indicates that the system could not create a connection to the hostagent:
To resolve the issue, execute this command to check if the sapstartsrv process is running (it should be
started with the SAP Host Agent):
For example, if Host1 (the primary companion) goes down and fails over to Host2 (the secondary companion),
restart SAP ASE on Host1 as a Windows service instead of using startup scripts such as the <RUN_server> file.
Rectifying errors in the HADR system often involves reviewing the error logs.
The following examples assume the configuration described in this RMA sap_set command:
sap_set
go
PROPERTY VALUE
---------------------------------------- -------------------------
maintenance_user NW7_maint
sap_sid NW7
installation_mode BS
Symptom: The connection from the primary to the companion server is suspended.
In this situation, issuing the sap_status path RMA command results in a message similar to the following
from the RMA command line, where the line in bold indicates that a server is down:
The output describes four paths: two should be active and two should be suspended. The two that should be
active are the paths that include the primary companion; in this example, these are the paths that start with
"NW7PRI".
1. Use the Replication Server admin who_is_down command to determine which server is down (see the
Replication Server Reference Manual > Replication Server Commands). In this example, admin
who_is_down indicates that path NW7_NW7SEC.NW7 is down, which is associated with thread number
106:
admin who_is_down
Spid Name State Info
---- ---------- --------------------
------------------------------------------------------------
REP AGENT Suspended NW7_NW7SEC.master
NRM Suspended NW7_NW7SEC.master
DSI EXEC Suspended 106(1) NW7_NW7SEC.NW7
DSI EXEC Suspended 106(2) NW7_NW7SEC.NW7
DSI EXEC Suspended 106(3) NW7_NW7SEC.NW7
DSI Suspended 106 NW7_NW7SEC.NW7
REP AGENT Suspended NW7_NW7SEC.NW7
NRM Suspended NW7_NW7SEC.NW7
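The queue number needed for the purge steps below can be read from the admin who_is_down output. The following illustrative Python sketch shows one way to extract it (the parsing is an assumption based on the sample output above, not a documented format):

```python
# Illustrative parser for admin who_is_down output: find the queue
# number on the suspended DSI line for a given path, so it can be
# passed to sysadmin sqm_purge_queue. The parsing is an assumption
# based on the sample output in this section, not a documented format.

def suspended_dsi_queue(output, path):
    for line in output.splitlines():
        fields = line.split()
        # match e.g. "DSI  Suspended  106  NW7_NW7SEC.NW7"
        if fields[:2] == ["DSI", "Suspended"] and fields[-1:] == [path]:
            return int(fields[2])
    return None

sample = """\
REP AGENT Suspended NW7_NW7SEC.master
DSI EXEC Suspended 106(1) NW7_NW7SEC.NW7
DSI Suspended 106 NW7_NW7SEC.NW7
"""
assert suspended_dsi_queue(sample, "NW7_NW7SEC.NW7") == 106
```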
3. If the resume connection command fails, issue either of these commands to purge all data from the
queue that is being removed:
Caution
Using the sysadmin sqm_purge_queue command or the resume connection … skip tran parameter
can cause a data mismatch between the standby and primary companions. These are very risky operations,
and you must perform a rematerialization once they are finished.
Tip
You can choose to run the sysadmin sqm_purge_queue command to purge queues, without
necessarily hibernating on the Replication Server. Instead, you can suspend the appropriate modules in
the Replication Server, and then purge queues as usual. Running sysadmin sqm_purge_queue with
the [, check_only] parameter facilitates this scenario, as it checks and reports if the appropriate
modules were suspended successfully (it does not purge queues), thus enabling you to make an
informed decision before purging queues. Note that you can continue to purge queues like you did
before – by hibernating on the Replication Server. For more information, see the Usage section under
SAP Replication Server Reference Manual > SAP Replication Server Commands > sysadmin
sqm_purge_queue.
○ resume connection … skip tran to resume the connection but skip the indicated transactions.
○ sysadmin sqm_purge_queue to purge all messages from a stable queue.
4. Execute the Replication Server sysadmin hibernate_on command to enable the server hibernation
mode:
sysadmin hibernate_on
5. Execute the Replication Server sysadmin sqm_purge_queue command (the combination '106, 0',
below, tells sqm_purge_queue to operate on queue 106:0):
sysadmin sqm_purge_queue,106, 0
6. Disable hibernation:
sysadmin hibernate_off
7. Execute the Replication Server resume connection command to resume the suspended connection:
8. Execute the RMA sap_status command to verify that the suspended path is now active:
sap_status path
Path Name Value Info
--------------------- ------------- -------------------------
---------------------
Start Time 2015-04-15 03:37:34.342 Time command started executing.
Elapsed Time 00:00:01 Command execution time.
NW7PRI Hostname mo-2897e9422.mo.sap.corp Logical host name.
NW7PRI HADR Status Primary : Active Identify the primary and standby sites.
NW7SEC Hostname mo-338995c0a.mo.sap.corp Logical host name.
NW7SEC HADR Status Standby : Inactive Identify the primary and standby sites.
NW7PRI.NW7SEC.NW7 State Active Path is active and replication can occur.
NW7PRI.NW7SEC.NW7 Latency Time 2015-04-15 02:03:33.180 Time latency last
calculated
NW7PRI.NW7SEC.NW7 Latency 340 Latency (ms)
NW7PRI.NW7SEC.NW7 Commit Time 2015-04-15 03:08:13.340 Time last commit
replicated
NW7PRI.NW7SEC.master State Active Path is active and replication can occur.
An RMA command failure results in an error message similar to the following, where a Task State of Error
(indicated in bold) identifies the failed task:
Action: Search the RMA error log for the reason the RMA command failed (all RMA commands are logged with
the "TDS LANGUAGE" prefix):
● Find the timestamp of the execution indicated by "Start Time" in the sap_failover output. In this
example, it is "Tue Apr 07 08:32:59 UTC 2015".
● Search the RMA error log for the "TDS LANGUAGE: sap_failover" keywords near the time indicated by
the timestamp (in this case, "04-07-15 07:53:47").
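The log search can be sketched with an illustrative Python helper (the sample log lines here are hypothetical, following the timestamp format shown above):

```python
# Illustrative helper for locating a failed RMA command in the RMA
# error log: RMA commands are logged with the "TDS LANGUAGE" prefix,
# so filter for the command of interest. The sample log lines are
# hypothetical, following the timestamp format shown above.

def find_command_entries(log_lines, command="sap_failover"):
    needle = "TDS LANGUAGE: " + command
    return [line for line in log_lines if needle in line]

log = [
    "04-07-15 07:53:47 TDS LANGUAGE: sap_failover",
    "04-07-15 07:53:48 some error text for the failed task",
    "04-07-15 07:55:01 TDS LANGUAGE: sap_status",
]
assert find_command_entries(log) == [
    "04-07-15 07:53:47 TDS LANGUAGE: sap_failover"
]
```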
Inconsistent Data
Action: Rematerialize the servers using these RMA commands (in these examples, the primary server is named
PRI, and the companion server is named STA):
1. Disable replication:
sap_disable_replication PRI,NW7
2. Re-enable replication:
sap_enable_replication PRI,NW7
3. Repeat for the master database:
sap_disable_replication PRI,master
sap_enable_replication PRI,master
4. Materialize the master database:
sap_materialize auto,PRI,STA,master
Note
The maintenance user password is changed and managed by Replication Server after you run
sap_materialize, preventing the database administrator from accessing the data of primary and
standby databases.
Then materialize the NW7 database:
sap_materialize auto,PRI,STA,NW7
Caution
Use dump transaction with no_log as a last resort, and use it only once after dump transaction
with truncate_only fails.
The with truncate_only and with no_log parameters allow you to truncate a log that has become
dangerously short of free space. Neither parameter provides a means to recover transactions that have
committed since the last routine dump.
Symptom: The primary server starts as the standby server if the primary server is started before the
companion server.
Action:
use master
sp_hadr_admin primary
sp_hadr_admin activate
Symptom: Issuing the sap_status path RMA command results in a suspended path because the Replication
Agent on SAP ASE is down:
To determine which Replication Agent to restart, find the name of the primary and companion servers, the path
marked "suspended," the location where the error originates, and on which host the error originates.
● Primary and companion servers – These lines indicate that NW7PRI is the primary server and that NW7SEC
is the companion server:
NW7PRI HADR Status Primary : Active Identify the primary and standby sites
NW7SEC HADR Status Standby : Inactive Identify the primary and standby sites
● Suspended path – Paths that start with NW7PRI should be active, but sap_status path reports that
NW7PRI.NW7SEC.NW7 is "suspended."
● Where the error originates – The NW7PRI.NW7SEC.NW7 line indicates that the suspension results from the
Replication Agent thread: "State Suspended Path is suspended (Replication Agent
Thread). Transactions are not being replicated."
● Host on which the error originates – The Additional Info for NW7PRI.NW7SEC.NW7 indicates that the
Replication Agent thread on host mo-f9bb75e82 is stopped: "The REPLICATION AGENT connection
in the Replication Server on the mo-f9bb75e82 host to 'NW7_NW7PRI_R2.NW7' is
suspended."
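Extracting the host name from the Additional Info text can be sketched as follows (illustrative Python; the message format follows the example above):

```python
# Illustrative extraction of the host name from the sap_status
# Additional Info text, so you know where to restart the Replication
# Agent. The message format follows the example in this section.

import re

def host_from_additional_info(info):
    match = re.search(r"on the (\S+) host", info)
    return match.group(1) if match else None

info = ("The REPLICATION AGENT connection in the Replication Server "
        "on the mo-f9bb75e82 host to 'NW7_NW7PRI_R2.NW7' is suspended.")
assert host_from_additional_info(info) == "mo-f9bb75e82"
```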
To resolve this issue, restart the Replication Agent running on host mo-f9bb75e82:
sp_start_rep_agent NW7
go
Replication Agent thread is started for database 'NW7'.
(return status = 0)
Symptom: After using sap_disable_replication to disable replication and then restarting the primary
SAP ASE server, the primary SAP ASE server can no longer be activated, and user applications fail to
connect to the primary databases.
sap_disable_replication only stops the RepAgent; it does not disable the RepAgent or set the second
truncation point (STP) to end. However, SAP ASE cannot be activated if a RepAgent is enabled and the STP
is not valid.
Action: Either set the STP to end manually, or disable the RepAgent manually, on the primary SAP ASE.
● To set the STP to end manually and then activate the primary SAP ASE:
use master
go
dbcc settrunc('ltm','end')
go
use ERP
go
dbcc settrunc('ltm','end')
go
use master
go
sp_hadr_admin activate
go
use master
go
dbcc settrunc('ltm','ignore')
go
use ERP
go
dbcc settrunc('ltm','ignore')
go
● To disable the RepAgent and then activate the primary SAP ASE:
sp_config_rep_agent master,disable
go
sp_config_rep_agent 'ERP',disable
go
sp_hadr_admin activate
go
If SAP Replication Server is unavailable during an SAP ASE startup after an unplanned failover, use SAP ASE
commands to recover a database that is enabled for synchronous replication, and make it accessible online.
Context
If the replication mode is synchronous for the primary data server and SAP Replication Server is unavailable
during SAP ASE startup after an unplanned failover, SAP ASE cannot recover the original primary data server
and make it assume the role of a standby data server, since SAP ASE cannot connect to SAP Replication Server
to obtain information about the last transaction that arrived at SAP Replication Server. For example, if the
database name is D01 and <dbid> represents the database ID, you see the following in the SAP ASE error log:
Error: 9696, Severity: 17, State: 1
Recovery failed to connect to the SAP Replication Server to get the last oqid for
database 'D01'.
Database 'D01' (dbid <dbid>): Recovery failed.
Check the ASE errorlog for further information as to the cause.
Procedure
1. Check the SAP ASE error log to see if the latest attempt to connect to the SAP Replication Server failed.
2. Verify that the original primary database has not been recovered.
For example, if the database name is D01, log in to isql and enter:
use D01
go
4. In SAP ASE, enable trace flag 3604 to log all events and any errors that occur during database recovery:
dbcc traceon(3604)
go
dbcc dbrecover(D01)
go
The recovery is successful and the database is accessible online if you see the events logged by the trace
flag ending with:
use D01
go
You can use the dataserver --recover-syncrep-no-connect parameter to restart the primary SAP ASE
data server without synchronization to SAP Replication Server if you cannot restart SAP Replication Server
during an unplanned failover.
Context
During failover of SAP ASE from primary to standby, SAP ASE needs to connect to SAP Replication Server to
query the last transaction it received from the primary ASE dataserver. When this connection is not possible,
the following error is logged in the ASE error log, and the replicated database(s) are not recovered:
Error 9696: "Recovery failed to get the last oqid for database '<name>' from SAP
Replication Server because it was either unable to connect or it received an
error".
The --recover-syncrep-no-connect parameter starts SAP ASE and tries to connect to the SAP
Replication Server during recovery. If the connection attempts to SAP Replication Server fail, error 9696 is not
invoked. SAP ASE recovers the databases, but the primary and standby databases may not be synchronized.
Without synchronized replication between the databases and SAP Replication Server, you cannot recover from
an unplanned failover with the assurance of no data loss that synchronized replication provides.
Procedure
...
dataserver --recover-syncrep-no-connect
...
Troubleshooting a failed installation involves rectifying SAP installer errors, recovering from a failed setup, and
performing a teardown.
Troubleshooting the SAP installer often includes rectifying the Replication Server configuration and
materialization.
Perform these steps if the SAP installer encounters an error when configuring the Disaster Recovery
environment:
1. Check the SAP installation log for errors; it shows errors from the SAP installer perspective.
2. If the error cannot be resolved, check the RMA log, located at $SYBASE/DM/RMA-16_0/instances/
AgentContainer/logs/RMA_*.log on the primary Replication Server machine.
3. Examine SAP ASE and Replication Server log files for installation errors.
After the error is resolved, retry the configuration.
If the SAP installer encounters an error during materialization, and you determine after investigating the logs
that the database being materialized is in use:
1. Log in to the standby SAP ASE and issue sp_who to determine which processes are using the database.
2. For any existing process, have its associated user log off the server to remove the process. As a last
resort, use the kill command to remove the process.
Materialization Fails
If the SAP installer encounters an error during materialization and you determine, after investigating the
logs, that you must perform materialization again, first reset replication and then repeat the materialization.
When you resolve the error, retry the configuration by clicking the Retry button in the SAP installer.
For more information about RMA commands, see the RMA Configuration and User Guide or issue sap_help at
the command line.
Perform these steps if the SAP installer encounters an error during materialization and, after investigating the
logs, you determine that the net password encryption setting is out of sync.
You can recover from failed setups from the SAP installer and from the setuphadr utility.
Perform tasks on the first and second site to recover from a failed setup.
1. If the HADR setup failed, click Next to complete the installation. The installer starts RMA.
2. Check the setuphadr utility log file, located in $SYBASE/ASE-16_0/init/logs for the cause of failure,
and correct it.
3. Enter the passwords in the setuphadr utility response file, located at $SYBASE/ASE-16_0/setuphadr.rs.
4. Execute this command to finish the setup using the setuphadr utility:
setuphadr $SYBASE/ASE-16_0/setuphadr.rs
Note
1. If the HADR setup failed, click Next to complete the installation. The installer starts RMA.
2. Check the setuphadr utility log file in $SYBASE/ASE-16_0/init/logs and the RMA log in
$SYBASE/DM/RMA-16_0/instances/AgentContainer/logs directories to find out why the setup
failed, and make any required corrections.
3. Perform the following depending on when the HADR setup failed:
○ If HADR setup failed before the Setup Replication task in setuphadr utility log file:
1. Enter the passwords in the setuphadr utility response file, located at $SYBASE/ASE-16_0/
setuphadr.rs.
2. Issue this command to finish the setup using the setuphadr utility:
setuphadr $SYBASE/ASE-16_0/setuphadr.rs
○ If HADR setup failed during, or after, the Setup Replication task in setuphadr utility log file:
1. Perform the teardown with the instructions in Troubleshooting the Replication System [page 408]
> Recovering Replication Server.
2. If $SYBASE/ASE-16_0/setuphadr.rs does not exist on first site, copy it from second site, then:
○ Enter the passwords
○ Set is_secondary_site_setup property to false
○ Set the value of the setup_site property to the first site
3. On the first site, run setuphadr utility with the edited setuphadr.rs responses file:
setuphadr $SYBASE/ASE-16_0/setuphadr.rs
4. Rerun the setuphadr utility on the second site:
setuphadr $SYBASE/ASE-16_0/setuphadr.rs
How you recover from failed setuphadr setup depends on when the failure occurred.
1. Check the setuphadr utility log file in $SYBASE/ASE-16_0/init/logs and the RMA log in
$SYBASE/DM/RMA-16_0/instances/AgentContainer/logs directories for the reason the setup
failed, and make any required corrections.
2. If the HADR setup failed before the Setup Replication task in setuphadr utility log file, rerun
setuphadr on the current site.
Remove any existing RMA service before you rerun setuphadr on Windows 64-bit.
3. If the HADR setup failed during, or after, the Setup Replication task in setuphadr utility log file:
1. Perform a teardown according to the instructions in Performing a Teardown [page 401].
2. Rerun the setuphadr utility on first site.
3. Rerun the setuphadr utility on the second site.
The steps described in this section require you to issue the sap_teardown command, which automatically
performs these tasks:
● Stops the Replication Server and deletes its instance directory, partition files, and simple persistent queue
directories, and kills all Replication Server related processes.
● Deactivates the primary SAP ASE, then changes its mode to standby, if the source host (the machine on
which SAP ASE runs) is available.
● Drops all servers from the HADR server list on both SAP ASE servers.
● Drops the HADR group from both servers.
● Disables HADR on both servers.
● Disables CIS RPC Handling.
Note
● The sap_teardown command does not drop the logins for the administrator or maintenance user.
Drop and re-create these logins after running sap_teardown.
● Clean up the SPQ directories on each host after running sap_teardown, otherwise you may encounter
errors when re-creating the HADR system.
Tearing down a replication environment includes disabling replication in the SAP ASE servers, stopping the SAP
Replication Servers, and deleting some directories and files created during setup, including the SAP Replication
Server instances.
After the teardown is complete, the system is no longer an HADR system. The SAP ASE is left running after the
teardown and should be treated like a regular SMP server.
Use the sap_teardown command to tear down the replication environment. The command does not modify
any data that has been replicated to the standby databases, and the databases on both the primary and
standby hosts remain marked for replication. The command does not remove any software, but it does
remove the SAP Replication Servers and the configurations that support replication.
The primary and standby dump directories are not deleted during teardown. The dump directories are defined
by setting the db_dump_dir property with sap_set. These directories can grow very large depending on the
amount of data materialized; maintaining them is the responsibility of the user.
The primary and standby device directories are also not deleted during teardown. These directories are
defined by setting the device_buffer_dir property with sap_set.
2. Execute:
sap_teardown
1. Log into the primary and standby SAP ASE servers and remove the HADR proxy tables:
use master
go
drop table hadrGetTicketHistory
go
drop table hadrGetLog
go
drop table hadrStatusPath
go
drop table hadrStatusResource
go
drop table hadrStatusRoute
go
2. Log into the primary and standby SAP ASE servers and remove these Replication Server system objects
from the master and participating databases:
3. Log into the primary server to remove and disable HADR member information:
4. Log into the standby server to remove and disable HADR member information:
$SYBASE/DM/RMA-16_0/instances/AgentContainer/configdb/*
$SYBASE/DM/RMA-16_0/instances/AgentContainer/backups/*
If HADR mode is enabled and there is no running RMA instance, removehadr performs the following:
Execute the sap_teardown command before running the removehadr utility; otherwise, the utility logs in to
the RMA server and completes the teardown process itself.
The removehadr.sh (for Linux) or removehadr.cmd (for Windows) files, as well as removehadr.jar, are
present in the $SYBASE/DM/RMA-16_0/bin directory.
Where:
● <res_file> - is the path of the resource file that stores the HADR installation information.
● <sa_username> - is the sa username used to connect to SAP ASE and RMA.
● <sa_password> - is the sa login password used to connect to SAP ASE and RMA.
● <DR_admin_password> - is the DR admin password used to connect to DR and RMA.
● <server_name> - is the SAP ASE server name that is used to connect.
● <interface_file> - is the interface file used for connection.
Examples:
2. Example 2
This example removes the HADR environment with the given resource file. The resource file here is the
same responses file setuphadr used to set up the HADR environment:
removehadr.sh -R setup.rs
3. Example 3
This example removes the HADR environment by using the SAP ASE sa login:
SA user password:
sybase
Executing ASE Command: 'use master'
Executing ASE Command: 'sp_configure 'HADR mode''
Executing ASE Command: 'use master'
Executing ASE Command: 'sp_role 'grant',replication_role,sa'
Executing RMA Command: 'sap_teardown'
Shutting down RMA instance.
Dropping DR_admin user....
4. Example 4
This example removes the HADR environment by using the sa login, with an interfaces file and a server
logical name:
11.6 Monitoring
You can monitor the health of the Replication Server in the HADR system.
Latency is measured as the time from when the commit is received at the primary server until the commit is
seen on the standby server.
Check the health of replication using the Replication Server admin command:
admin who_is_down
admin who,sqm
admin disk_space
admin stats,mem_in_use
admin stats,max_mem_use
● View details of the memory used and maximum amount of memory used:
admin stats,mem_detail_stats
There are a number of monitoring tables for monitoring the Replication Agent, including:
● monRepCoordinator
● monRepLogActivity
See the SAP ASE Reference Manual: Tables > Monitoring Tables and Performance and Tuning Series: Monitoring
Tables for information about using these monitoring tables.
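For example, these tables can be queried from an isql session on the SAP ASE server; a minimal sketch:

```sql
select * from monRepCoordinator
go
```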
Rep Agent has a syslogs scanner, and a sysimrslogs scanner if in-memory row storage (IMRS) is enabled on a
database.
● monRepLogActivity
● monRepMemoryStatistics
● monRepScanners
● monRepScannersTotalTime
● monRepSchemaCache
● monRepStreamStatistics
For an IMRS database, the sysimrslogs scanner is enabled by default. You can turn on the trace flag 9126 to
disable it if necessary.
See the Reference Manual: Tables > Monitoring Tables for more information about these tables.
Troubleshoot Replication Server issues to rectify replication, RMA, and unreplicated data issues.
There are a number of steps you can perform to troubleshoot the replication system.
After you enable replication for a primary connection, executing the admin who command from the isql
prompt on Replication Server displays the status of the Synchronous Replication components. For example:
Where:
● CAP – the Capture component reads stream replication packages from the simple persistent queue (SPQ).
It translates stream replication commands into Replication Server commands and writes them into the
inbound queue. CAP is one of:
○ Awaiting Command – waiting for message from the SPQ reader.
○ Active – processing a package from the SPQ.
○ Down – Capture has failed and shut down, or Replication Server is in hibernation mode.
○ Suspended – suspended.
● REP AGENT CI - the Replication Agent or log transfer components. REP AGENT CI is one of:
○ Active – Replication Agent is connected.
○ Down – Replication Agent is not connected.
○ Suspended – Log transfer is suspended.
● SPQ WRITER – the inbound stream replication connection. It receives messages from Replication Agent
and writes them into the SPQ. SPQ WRITER is one of:
○ QWait – waiting due to a full writer queue.
○ Dup – detects a duplicate message.
○ Writing – currently writing a message to the SPQ file.
○ Ready – waiting for incoming message from Replication Agent.
○ Down – inactive because Replication Agent is down.
● SPQ READER - the outbound stream replication connection. It reads stream replication packages from SPQ
and sends them to Capture. SPQ READER is one of:
○ QWait – waiting due to a full reader queue.
○ NCWait – detects NC commands in SPQ.
Use the admin who command from the isql prompt on Replication Server to check the status of the Replication
Agent (displayed as REP AGENT CI in the output). The status is Active if the Replication Agent is connected.
When Replication Agent connects to Replication Server, it includes messages similar to the following in the
Replication Server log file:
If Replication Server indicates that Replication Agent is running on the primary database but is not connected,
use sp_help_rep_agent <database_name>, process to check the status of the Replication Agent
process. Connect to the SAP ASE acting as the primary node and execute:
sp_help_rep_agent pdb,process
go
Replication Agent Coordinator Process Status
dbname spid sleep_status state
------ ---- ------------ --------
pdb 58 sleeping sleeping
(1 row affected)
See the Replication Server Reference Manual > SAP ASE Commands and System Procedures for information
about sp_help_rep_agent.
Issue sp_who on SAP ASE to determine if the Replication Agent is running on the primary SAP ASE. Connect to
the SAP ASE acting as the primary node and execute:
sp_who
go
fid spid status     loginame origname hostname blk_spid dbname tempdbname cmd                  block_xloid threadpool
--- ---- ---------- -------- -------- -------- -------- ------ ---------- -------------------- ----------- ----------
::::::::::::::::::::::::::::::
  0   58 background NULL     NULL     NULL            0 pdb    tempdb     REP AGENT                      0 NULL
  0   59 background NULL     NULL     NULL            0 pdb    tempdb     REP AGENT CI STPMGR            0 NULL
  0   60 background NULL     NULL     NULL            0 pdb    tempdb     REP AGENT CI SCANNER
Use trace information to collect information about the stream replication libraries.
● SPQ_TRACE_PACKAGE – dumps information about every stream replication package processed (read or
write) in the SPQ.
● SPQ_TRACE_DISPATCHER – logs package dispatcher activities in the SPQ.
● SPQ_IGNORE_TRUNCATION – discards truncation point movement requests.
Use trace information to collect information about the capture module. Enable the traces with this syntax:
A message is written to the Replication Server error log file when Capture fails to parse a command and shuts
down.
Capture fails with a bad row buffer. Row buffer length=%d, Column name=%s,
Column type=%d, Max column length=%d, Column length=%d, Column offset=%d.
Capture receives a corrupted row buffer. Row buffer dump(%d bytes): <hex dump>
Action – Enable Capture trace to collect more information and contact an SAP Replication Server
administrator (see the following section).
Capture issues an error message to the Replication Server error log file when it fails and shuts down.
Action – Use the sysadmin Replication Server command from the isql prompt to collect more information
and contact an SAP Replication Server administrator. The syntax is:
The sysadmin command dumps table schemas from the schema cache or the Replication Server System
Database (RSSD). The Replication Agent sends a table's schema when a row of the table is processed for the
first time. See the Replication Server Reference Manual.
Capture adds these schemas to a cache and persists them in the RSSD so that it can parse the SAP ASE raw
row buffer.
When parsing a command, Capture gets the schema it requires from the cache. However, if it is not in the
cache, it is loaded from the RSSD.
Before you purge the SPQ, suspend the log transfer and capture, or make sure Replication Server is in
hibernation mode.
To purge the SPQ, from the Replication Server's isql command line:
RMA displays an error similar to the following when an issued sap_set_host command cannot connect to
another agent (see bold text):
---------- ----------------------------- -----------------------------------
Set Host   Start Time                    Thu Jun 23 02:13:06 EDT 2016
SetHost    Hostname                      site0
1. Log in to the remote RMA and run sap_AgentInfo to review agent connection information to use in a
subsequent step to alter the agent connection:
rsge.bootstrap.debug=true
The RMA displays information about its starting address, hostname resolution, and binding information.
5. Add this information specifying the IP address to the $SYBASE/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop file:
java.rmi.server.hostname=<IP_Address>
Unplanned failovers may result in lost data (even in synchronous mode), and you may need to rematerialize the
databases to resynchronize the data.
When you execute sap_status after executing sap_failover, the RMA produces messages that contain the
phrases "Additional Info 2" or "Corrective Action". Messages that do not include the "Corrective
Action" phrase mean that you need not rematerialize the database. The steps provided by the "Corrective
Action" describe what you must perform to rematerialize the databases.
● If Replication Server is configured for, and is running in, synchronous mode – RMA produces this message:
In this case, rematerialization of the database is not necessary. However, for the two cases below, you may
need to rematerialize the databases.
● If Replication Server is configured for, but is not running in, synchronous mode – RMA produces this
message:
● If the version of Replication Server does not support synchronous mode – RMA produces this message:
There are a number of situations for which you need to recover and rebuild the HADR system.
If the standby server reports that the database log is full, the replication path is likely broken. In this case,
check the error log for errors and fix any replication issues you find. If there are none, you may need to
increase the size of the log on the standby server.
To recover the HADR system when SAP ASE reports that the database log is full:
● Disable the Replication Server from the HADR system by logging in to the RMA and running:
sap_disable_replication Primary_Logical_Site_Name
After you disable replication, the HADR system preserves the mode and state so that all HADR-aware
applications continue to log into the primary companion to execute their work.
Synchronize the standby and primary companions by rematerializing the master and CID databases (the
secondary truncation point is removed from the primary companion for the master and CID databases):
1. Enable replication:
sap_enable_replication <Primary_Logical_Site_Name>
Note
The maintenance user password is changed and managed by Replication Server after you run
sap_materialize, preventing the database administrator from accessing the data of primary
and standby databases.
3. To manually materialize the CID database, disable the automatic database dump and backup process,
and verify a dump is not currently running:
sap_teardown
sap_teardown disables the HADR system on the primary and companion servers. After the teardown is
finished, the application server can connect to any SAP ASE server because it is running in standalone
mode. The Replication Server is shut down and is available to reconfigure.
Note
The sap_teardown command does not drop the logins for the administrator or maintenance user.
Drop and re-create these logins after running sap_teardown.
The RMA supports the Replication Server hidden maintenance user password.
Replication Server periodically changes the maintenance user password. After executing sap_teardown, you
may need to reset the SAP ASE maintenance user password in both servers before configuring a new
replication environment.
Use sp_password to reset the SAP ASE maintenance user password. For example:
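The example itself did not survive in this copy; a hedged sketch follows, in which the caller's password, the new password, and the maintenance login name are all placeholders you must substitute for your environment:

```sql
sp_password <caller_password>, <new_maint_password>, <maint_login_name>
go
```

Run this on both the primary and standby SAP ASE servers before configuring the new replication environment.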
The RMA assumes that the SAP ASE and Replication servers running on a logical host use the same network
domain. If they do not, the HADR setup fails during the ERP database materialization and issues an error
stating that SAP ASE could not communicate with a remote SAP ASE.
Use sp_hadr_admin addserver to manually change the network name in SAP ASE. Use sap_set to verify
the name change after you have added all the hosts using sap_set_host. Check the rs_hostname and
ase_hostname for each logical hostname to confirm the fully qualified domain names use the same suffixes.
The RMA cannot validate the dump directory’s location or permissions when SAP ASE and Replication Server
are located on separate host computers.
.sqlanywhere16 Directory
Replication Server uses an SQL Anywhere database to host the embedded RSSD.
A directory named .sqlanywhere16 is created in the operating system user’s home directory when you
create this database. The SQL Anywhere-embedded RSSD continues to function correctly if
the .sqlanywhere16 directory is accidentally deleted, and SQL Anywhere writes the directory in another
location if the home directory does not exist.
However, SQL Anywhere also creates another directory to store temporary files. If you set the <SATMP>
environment variable, SQL Anywhere uses this location to store its temporary file. To set <SATMP> in the:
If <SATMP> is not set, SQL Anywhere uses the value specified by the <TMP> environment variable for the location
of the temporary files. If <TMP> is not set, it uses the value specified by <TMPDIR>. If <TMPDIR> is not set, it
uses the value specified by <TEMP>. If none of these environment variables are set, the temporary files are
created in /tmp.
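For example, in a Bourne-style shell the variable can be set and exported before starting the server; the directory path here is an assumption — choose one with sufficient free space:

```shell
# Assumed location; any writable directory with enough space will do.
SATMP=/var/tmp/sqlanywhere
export SATMP
mkdir -p "$SATMP"
```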
You can quickly determine the version of RMA from the executable using the -v or -version arguments.
Data may not replicate because the inbound or the outbound queues are full.
Symptom: The inbound queue (IBQ) is reported as full because the downstream components (DIST or DSI) are
suspended, or they are shut down due to another issue. Replication Server displays a message similar to this in
the log, indicating the partition is full:
Recovery Procedure: Check the data accumulated in the IBQ. After fixing the issues, resume the suspended
component or restart the failed component.
Symptom: The IBQ is reported as full because the downstream components (Distributor or DSI) cannot keep
up with the upstream components (Capture or Replication Agent), and as a result, data accumulates in the
IBQ. The Replication Server error log includes one or more messages similar to this, indicating the current
status of the SRS partition:
In this situation:
● The admin disk_space command indicates that the Replication Server partition is full.
● The admin who command indicates that all components on the path are running.
● The admin who, sqm command indicates there is backlog in the IBQ.
Recovery Procedure:
● If the Replication Server’s partition is too small (for example, the partition size is 100 MB or 1 GB), issue a
command similar to this to add more space:
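The command shown here in the original was elided in this copy; Replication Server adds queue space with create partition. A hedged sketch, in which the logical name, file path, and size (in MB) are assumptions:

```sql
create partition p2 on '/replication/partitions/p2.dat' with size 2048
go
```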
● If the IBQ is reported as full due to poor DIST performance, consider tuning the Distributor so that it can
keep up with upstream Capture and Replication Agent components. In this situation, the admin who,
sqm command indicates there is little or no backlog in the outbound queue (OBQ).
● If the IBQ is reported as full due to poor DSI performance, tune the DSI to keep up with Distributor,
Capture, and Replication Agent. In this situation, admin who, sqm indicates a backlog in the IBQ and
OBQ.
Symptom: The IBQ is reported as full due to open transactions. In this situation, when the open transaction is
discovered, the IBQ cannot be truncated and eventually fills up. The Replication Server error log includes a
message similar to this, indicating the partition is full:
● The admin who command indicates that all the components on the path are running.
● The admin disk_space command indicates that the Replication Server partition is full.
● The admin who, sqm command indicates backlog in the IBQ.
● The admin who, sqt command indicates an open transaction in the IBQ.
● If the open transaction occurs because Replication Agent disconnects and then reconnects without
sending a purge open command, purge the open transactions by issuing:
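One way to do this is the Replication Server sysadmin purge_first_open command; a hedged sketch, in which the queue number and queue type are assumptions — confirm the actual values with admin who, sqt before purging:

```sql
sysadmin purge_first_open, 103, 1
go
```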
Symptom: The outbound queue (OBQ) is reported as full because DSI is suspended, or is down and issues a
message similar to:
Recovery Procedure: After fixing the issues, resume the suspended DSI or restart the failed DSI.
Symptom: The OBQ is reported as full because the DSI cannot keep up with the Distributor, Capture or
Replication Agent upstream components, and as a result, data accumulates in the OBQ. The Replication
Server’s log message indicates that the partition is exhausted:
In this situation:
● The admin who command indicates that all the components on the path are running
● The admin disk_space command indicates that the Replication Server partition is full
● The admin who, sqm command indicates backlog in the OBQ
Recovery Procedure:
● If the Replication Server’s partition is too small (for example, the partition size is 100 MB or 1 GB), issue a
command similar to this to add more space:
● If the OBQ is reported as full due to poor DSI performance, consider tuning the DSI component so that it
can keep up with upstream Distributor, Capture, and Replication Agent components.
SPQ is Full
Symptom: The SPQ is reported as full because the downstream Capture, DIST, or DSI components are
suspended, or they are shut down due to another issue. In this situation, the Replication Agent stops sending
messages to the Replication Server, and replication eventually stops. The Replication Server error log indicates
the current SPQ status:
Recovery Procedure: Either disable replication or enlarge the SPQ (see the next recovery procedure). Use this
syntax to disable replication:
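As shown earlier in this chapter, the RMA command takes the primary logical site name:

```sql
sap_disable_replication <Primary_Logical_Site_Name>
```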
Symptom: The SPQ is reported as full because the downstream component cannot keep up with the
Replication Agent, resulting in data accumulating in the SPQ. The Replication Server error log includes one or
more messages similar to this, indicating the current status of the SPQ:
Recovery Procedure:
● If the SPQ is configured too small (that is, the value of spq_max_size is significantly less than the
workload requires), you may need to increase the size of the SPQ. For example, if the maximum size is
100 MB or 1 GB, increase the SPQ size by issuing:
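A hedged sketch of such an increase, assuming the parameter is raised on the primary connection with alter connection; the data server name, database name, and new size are assumptions:

```sql
suspend connection to PDS.pdb
go
alter connection to PDS.pdb set spq_max_size to '2048'
go
resume connection to PDS.pdb
go
```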
● If the SPQ is reported as full because of any issue related to Capture, you can tune the Capture so that it
can keep up with Replication Agent. In this situation, the admin who, sqm command indicates that there
is little or no backlog in the IBQ and OBQ:
● If the SPQ is reported as full due to poor DSI performance, you can tune the DSI to keep up with the
upstream components. In this situation,
○ The admin who, sqm command indicates that the IBQ or OBQ contains a lot of backlog.
A situation in which the transaction log of the primary SAP ASE continues to grow, but issuing dump
transaction does not free up space, may indicate that the transaction log needs more space or that the
secondary truncation point is not moving.
SAP ASE uses truncation points to ensure that only transactions processed by the Replication Agent are
truncated. A secondary truncation point marks the place in the primary database log up to which the
Replication Agent has processed transactions. The Replication Agent periodically updates the secondary
truncation point to reflect transactions successfully passed to the Replication Server. SAP ASE does not
truncate the log past the secondary truncation point. See the Troubleshooting Guide for more information
about truncation points.
To determine if the secondary truncation point does not move, connect to the SAP ASE acting as the primary
node, and execute a select statement from syslogshold at regular intervals. For example:
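From an isql session on the primary, repeated at intervals:

```sql
select * from master..syslogshold
go
```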
In this output:
● The row that includes the name equal to $replication_truncation_point displays data related to the
secondary truncation point.
● The page column contains the page number to which the secondary truncation point is pointing.
If the page value does not change during multiple executions of select * from master..syslogshold,
check if you have a long-running transaction exhausting the transaction log. If you do not, check if Replication
Server is running and that Replication Agent is connected to it. If the system is replicating, connect to
Replication Server and issue admin who.
If the status of the SPQ READER has a value of NCWait, this may indicate that the secondary truncation
point cannot be moved because there are NC (non-confirmed) transactions. To verify this, create a dummy
table, mark it for replication, insert data into the table, and issue select * from master..syslogshold to
see whether the secondary truncation point moves.
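A sketch of that check; the database name pdb matches the earlier examples in this section, while the table name is an assumption:

```sql
use pdb
go
create table t_repl_check (c1 int)
go
sp_setreptable t_repl_check, 'true'
go
insert into t_repl_check values (1)
go
select * from master..syslogshold
go
```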
Use the rs_ticket Replication Server stored procedure to troubleshoot performance issues.
To start troubleshooting performance issues, execute an rs_ticket 'begin' command from the primary
server at any stage of your workload, letting the workload continue normally. Allow the rs_ticket command
to run for about 60 minutes, then issue rs_ticket 'end'. The rs_ticket command flows through the
Replication Server.
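The sequence looks like this from an isql session on the primary server; the 60-minute interval follows the guidance above:

```sql
rs_ticket 'begin'
go
-- ...let the workload run for about 60 minutes, then:
rs_ticket 'end'
go
```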
● If both the tickets have reached the rs_ticket_history table, you can calculate the time taken to reach
the end component by subtracting the begin value from the end value. In this example, the begin value is
08/29/15 15:08:53.243, and the end value is 08/29/15 15:18:53.363, so the time required was
approximately 10 minutes:
V=2;H1=begin;PDB(HA1)=08/29/15 15:08:53.243;RA(HA1)=08/29/15 15:08:53:243;EXEC(51)=08/29/15 15:18:53.374;B(51)=34770028920;DIST(11)=08/29/15 15:18:53.623;DSI(75)=08/29/15 15:18:53.907;DSI_T=8143963;DSI_C=426597987;RRS=HA1_REP_hasite2
V=2;H1=end;PDB(HA1)=08/29/15 15:18:53.363;RA(HA1)=08/29/15 15:18:53:363;EXEC(52)=08/30/15 00:00:01.619;B(52)=34770028920;DIST(11)=08/30/15 00:00:01.872;DSI(75)=08/30/15 00:00:02.165;DSI_T=8143964;DSI_C=426597990;RRS=HA1_REP_hasite2
● If a component is taking a lot of time and not catching up with other components, identify the component
causing the bottleneck by subtracting the begin value from the end value for each component, and then
retune the component.
Additionally, you can frequently check the Replication Server error log for a "memory limit exceed" message,
indicating that Replication Server has reached its memory limit. If you see this error message, you may need to
increase the value for the Replication Server memory_limit configuration parameter.
Note
By default, Replication Server attempts to manage the memory as much as possible without any manual
intervention.
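If an increase is needed, the configure replication server command changes the parameter; the value shown (in MB) is an assumption:

```sql
configure replication server set memory_limit to '4096'
go
```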
11.9 Failover
sap_status shows the reason for failure and the corrective actions. View the log files of RMA, SAP ASE, and
Replication Server to see additional details of the failures.
Note
The Fault Manager triggers an unplanned failover by connecting to the RMA on the companion node and
issuing sap_failover. Begin troubleshooting by first looking into the Fault Manager log and issuing
sap_status on the RMA on the companion node. The steps to rectify an unplanned failover are identical to
those of a planned failover.
The RMA rotates the log file each day. Log files from earlier days are in the same directory.
By default, the SAP ASE log files for the primary and standby servers are located in
$SYBASE/$SYBASE_ASE/install/<server_name>.log.
A planned failover executes these steps to switch the activity to the new primary site:
● Deactivating the primary SAP ASE and waiting for the backlog to be drained.
● Ensuring that all data has been applied to the standby site.
● Reconfiguring the system, changing the replication direction.
● Activating the new SAP ASE as the primary server.
sap_failover is an asynchronous operation. Use sap_status 'task' to check the status of the
sap_failover command.
If sap_failover fails, use the sap_status 'task' parameter to see the reason for the failure and any
verification you can perform for your system.
Example 1
In this example, a planned failover failed during the deactivation step because the transaction log of the source
SAP ASE was not drained during the time provided by the sap_failover command:
In this situation, check if all Replication Agents are running on the databases participating in the HADR system.
Execute sp_help_rep_agent with the scan parameter to check the progress in the transaction log, or with the
scan_verbose parameter to determine the number of pages Replication Agent must still process before
reaching the end of the log. For example:
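The example did not survive in this copy; a sketch using the pdb database from the earlier sp_help_rep_agent output:

```sql
sp_help_rep_agent pdb, scan
go
```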
Example 2
In this example, the sap_status 'task' parameter shows that the validation that checks if all data was
applied to the standby site has failed:
In this situation, sap_status 'task' suggests checking whether all connections to Replication Server are
running.
In case of failure, check the SAP ASE and Replication Server logs for more information.
Example 3
In this example, the sap_status command indicates that there is a Replication Agent instance enabled on the
database in the standby host, which is not in a normal state:
TASKNAME   TYPE                                  VALUE
---------- ------------------------------------- -------------------------------------------
Status     Start Time                            Thu Apr 04 22:58:59 PDT 2019
Status     Elapsed Time                          00:00:03
Failover   Task Name                             Failover
Failover   Task State                            Error
Failover   Short Description                     Failover makes the current standby ASE as the primary server.
Failover   Long Description                      Verifying logical host 'xxxxx' has been made available.
Failover   Failing Command Error Message         Logical host 'xxxxx' has not been made available. Rep Agent is still enabled.
Failover   Additional Info 2                     The primary Replication Server 'xxxxx:xxxx' is configured for synchronization mode and was found running in synchronization mode.
Failover   Corrective Action                     Run command 'sap_host_available xxxxxxxx' to make logical host available. Afterwards run command 'sap_failover xxxxx, xxxxxx, 300, unplanned' again.
Failover   Task Start                            Thu Apr 04 22:58:59 PDT 2019
Failover   Task End                              Thu Apr 04 22:59:02 PDT 2019
Failover   Hostname                              xxxxxxxxxxxxxx
By default, SAP ASE running in standby mode redirects the client login to the primary SAP ASE if the client
login does not have the allow hadr privilege granted.
To check if the login has this privilege, connect to the primary SAP ASE and run:
To show which roles have the allow hadr privilege, run:
In addition to the roles displayed in the output of the previous command, the following role and permissions
also have the allow hadr privilege:
● js_admin_role
● manage hadr privilege
● manage security permissions
The SAP ASE log contains information if it cannot redirect the login to the primary SAP ASE. In this situation,
the login fails because it is an unprivileged client connection.
Replication Agent running on stream replication consists of three processes: the coordinator, the scanner, and
the truncation point manager.
Replication Agent messages are indicated by RAT-CI, and client interface messages are indicated by CI-Info.
For example:
Rep Agent on database 'pdb' switched from mode 'async' to mode 'sync' because
scanner reached end of log.
● Replication Agent has opened the stream to replicate and the client interface native thread has opened the
CT-lib connection to Replication Server (that is, the channel is ready to replicate):
● These messages show the number of pages before the client interface reaches the end of the log, at which
point Replication Agent will switch to the configured mode:
Shutdown Messages
The Coordinator drives the shutdown process: the scanner closes the stream, stops the STPMGR, then stops
the Coordinator.
During shutdown, Replication Agent switches to asynchronous mode to communicate to Replication Server.
During stream replication, packages are composed of several commands. Every package may contain
metadata (schema) and commands. Enable trace flag 9229 to see what Replication Agent is sending. For
example:
The rows that contain the string "Metadata" (in bold) indicate that metadata is being added to the package.
For example, execute this:
There is no need to include the schema for object t1 in this package because this schema was already sent in a
previous package.
Replication Agent generates a unique identifier for every log record to be replicated. This identifier is internally
named OQID (origin queue ID). This value is used internally to indicate whether a message was already sent.
OQIDs look similar to this in the log files:
Problems in the Fault Manager typically occur when it takes actions on participating nodes.
Since the Host Agent is responsible for executing all local actions, it is useful to understand how to
troubleshoot it to resolve issues for the Fault Manager.
On hosts running the primary or standby servers, the Fault Manager heartbeat log file, named dev_hbeat, may
grow very large (may be gigabytes in size), causing the host's root (/) partition to fill, and the asehostctrl to
fail. Check the size of the file with this command to determine if a failure is caused by dev_hbeat growing too
large:
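The command itself was elided in this copy; any file-size listing works — a sketch, in which the path to dev_hbeat is an assumption (the file sits under the Fault Manager installation directory):

```shell
ls -lh <Fault_Manager_install_dir>/FaultManager/dev_hbeat
```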
You should set the trace to its highest level on the SAP Host Agent and the Fault Manager so the error log
output is as verbose as possible.
● For the SAP Host Agent – Increasing the trace level on the SAP Host Agent requires you to set the trace
level in the profile file and restart the SAP Host Agent for the change to take effect. The profile file is located
in:
○ (UNIX) /usr/sap/hostctrl/exe/host_profile
○ (Windows) %ProgramFiles%\SAP\hostctrl\exe\host_profile
1. Add this line to the profile file:
service/trace = 3
2. Restart the SAP Host Agent so the change takes effect:
/usr/sap/hostctrl/exe/saphostexec -restart
● For the Fault Manager – Increasing the trace level on the Fault Manager requires you to set the trace level in
the profile file and restart the Fault Manager for the change to take effect. Increasing the trace level
increases the number of log entries, and may increase the file size. The profile file is named SYBHA.PFL,
and is located in the installation directory of the Fault Manager on all platforms:
<Fault_Manager_install_dir>/FaultManager/sybdbfm_<CID>
1. Add this line to the profile file:
ha/syb/trace = 3
2. Restart the Fault Manager for the change to take effect.
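The SAP Host Agent steps above can be sketched as a small script. This is a sketch under the assumption that the profile lives at the UNIX location shown earlier; PROFILE is overridable so the change can be tried on a copy first:

```shell
#!/bin/sh
# Sketch: set service/trace = 3 in the Host Agent profile if not already set.
# PROFILE defaults to the UNIX profile location named in the steps above.
PROFILE=${PROFILE:-/usr/sap/hostctrl/exe/host_profile}
if [ -w "$PROFILE" ]; then
    grep -q '^service/trace' "$PROFILE" || echo 'service/trace = 3' >> "$PROFILE"
    # The setting takes effect only after a Host Agent restart:
    # /usr/sap/hostctrl/exe/saphostexec -restart
fi
```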
The Fault Manager makes these calls through the SAP Host Agent:
● LiveDatabaseUpdate – used to take an action (for example, restarting components, initiating failovers,
and so on). LiveDatabaseUpdate takes a <TASK> argument with it, which defines the action to perform.
<TASK> is one of:
○ HEARTBEAT_STARTUP
○ HEARTBEAT_CHECK
○ HEARTBEAT_STOP
○ REPLICATION_STATUS
○ RS_STANDBY_AVAILABLE
○ RESUME_REPLICATION
○ HA_VERSION
○ HB_VERSION
○ ADD_ASE_INSTANCE
○ SET_USER_PASSWORD
○ FAILOVER_UNPLANNED
● GetDatabaseStatus – Used to view the status of components. However, no action is taken on any
component.
● StartDatabase
● StopDatabase
Hostagent Timeout
If the following messages are displayed frequently in the SAP ASE Cockpit, they indicate that sapdbctrl calls
from the Fault Manager are timing out; you may need to increase the configured timeout value for SAP dbctrl:
Primary SAP Host ({Primary_Site}) Agent cannot be contacted
Primary SAP Host ({Primary_Site}) Agent contact is restored
Secondary SAP Host ({Secondary_Site}) Agent cannot be contacted
Secondary SAP Host ({Secondary_Site}) Agent contact is restored
To resolve this, increase the timeout period for sapdbctrl by increasing the value of the
ha/syb/dbctrl_timeout parameter in the Fault Manager profile file. The default value is 30.
Check the SAP Host Agent log file on the respective host if Fault Manager calls to the SAP Host Control fail. The
SAP Host Control log file is located in the /usr/sap/hostctrl/work/dev_sapdbctrl file. See the SAP Host
Agent Troubleshooting Guide at http://scn.sap.com/docs/DOC-34217 for more information.
● Symptom – While using the stop command to shut down the Fault Manager, you see this message:
This message appears when the Fault Manager is unable to stop; the Fault Manager exits and displays its
current running mode.
● Action – Re-execute the stop command.
Caution
Never stop the Fault Manager using the kill -9 operating system command.
● Symptom – The status of the primary and companion HADR nodes is healthy, but the sanity report
displays the ‘replication status’ as one of the following:
○ Suspended
○ Dead
○ Unusable
○ Indoubt
○ Unknown
● Action – Consult the Replication Server error logs for information.
● Symptom – You see a message similar to this when running any Fault Manager sybdbfm command:
You are not running the sybdbfm command from the directory where the profile file and other Fault
Manager-generated files (such as sp_sybdbfm, stat_sybdbfm, and so on) are located.
● Action – Rerun the sybdbfm command from the directory where these files are located.
● Symptom – You see a message similar to this when issuing sybdbfm status:
Action – This is not a fatal problem. This message is displayed when the Fault Manager cannot
communicate with the SAP ASE Cockpit on its configured TDS ports (the default port number is 4998).
This message is typically displayed when the wrong TDS port number is entered, or when the SAP ASE
Cockpit is not running. The Fault Manager continues to run, but does not send out notifications.
If the SAP ASE Cockpit is not running, restart it with:
$SYBASE/COCKPIT-4/bin/cockpit.sh -stop
$SYBASE/COCKPIT-4/bin/cockpit.sh -start
2014 11/18 21:18:01.116 executing: asehostctrl -host <ASE host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <ASE Server Name> -dbtype syb
-dbinstance <Site Name for ASE in RMA> -updatemethod Check -updateoption TASK=HB_VERSION .
2015 03/04 10:26:55.089 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Execute -updateoption TASK=HEARTBEAT_STARTUP .
2015 03/04 10:26:41.952 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Execute -updateoption TASK=HEARTBEAT_STOP .
This message typically occurs when the Fault Manager fails during the bootstrap cycle.
Action – Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/hostctrl/work/
dev_saphostexec for more information (these files require sudo access on the host).
● Symptom – dev_sybdbfm displays a message similar to:
2015 03/04 10:31:32.465 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Check -updateoption TASK=REPLICATION_STATUS .
This message typically occurs when the Fault Manager fails during the bootstrap cycle.
Action – Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/hostctrl/work/
dev_saphostexec for more information (these files require sudo access on the host).
● Symptom – dev_sybdbfm displays a message similar to:
2015 01/08 03:44:13.423 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Execute -updateoption TASK=RS_STANDBY_AVAILABLE.
This message typically occurs when the Fault Manager fails during the bootstrap cycle.
Action – Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/hostctrl/work/
dev_saphostexec for more information (these files require sudo access on the host).
● Symptom – dev_sybdbfm displays a message similar to:
2015 03/04 10:35:31.598 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Execute -updateoption TASK=RESUME_REPLICATION.
This message typically occurs when the Fault Manager fails during the bootstrap cycle.
Action – Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/hostctrl/work/
dev_saphostexec for more information (these files require sudo access on the host).
● Symptom – dev_sybdbfm displays a message similar to:
2015 03/04 10:25:33.983 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
Action – Verify that these SAP Host Agent processes are running:
/usr/sap/hostctrl/exe/saphostexec
/usr/sap/hostctrl/exe/sapstartsrv
/usr/sap/hostctrl/exe/saposcol
If any of these processes are not running, restart the SAP Host Agent:
/usr/sap/hostctrl/exe/saphostexec -restart
2015 03/04 10:25:38.700 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-updatemethod Check -updateoption TASK=HA_VERSION
2015 03/04 10:25:32.814 executing: asehostctrl -host <ASE Host> -user sapadm ********
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb -dbinstance <ASE Sitename>
-dbuser DR_admin -dbpass ******** -updatemethod Execute -updateoption TASK=SET_USER_PASSWORD
-updateoption USER=DR_ADMIN
This message occurs when the Fault Manager fails to set a username and password for a particular
component during the bootstrap cycle, typically because the passwords entered into the Secure Store during
Fault Manager installation do not match the current passwords for the same user names.
Action – Reconfigure the Fault Manager to automatically add the latest user and password combinations,
or update the individual passwords using the rsecssfx binary (located in
<installation_directory>/FaultManager/bin/). Check /usr/sap/hostctrl/work/
dev_sapdbctrl and /usr/sap/hostctrl/work/dev_saphostexec for more information (these files
require sudo access on the host).
● Symptom – dev_sybdbfm displays a message similar to:
2015 03/04 10:33:12.892 executing: asehostctrl -host <ASE Host> -user sapadm
********
This message occurs during execution when the Fault Manager fails to fail over, which may be caused by
more than one component in the HADR setup failing, or by a network outage during failover.
Action – Check the health of all components listed in the last Fault Manager health report, or query RMA
with the sap_status path command. A report that describes more than a single component as
unavailable probably explains why the failover failed. Restart any required components. If the failover fails
because of an intermittent network outage, the Fault Manager attempts the failover again when the network is back online.
Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/hostctrl/work/
dev_saphostexec for more information (these files require sudo access on the host).
For errors that occur in the Fault Manager, you should reproduce the errors in a smaller system to determine
the root cause by scanning the error log for the failed command and replacing the asehostctrl command
with /usr/sap/hostctrl/exe/saphostctrl.
Executing these commands individually at the command line gives you greater control over reproducing the
error and diagnosing its root cause. Check /usr/sap/hostctrl/work/dev_sapdbctrl and /usr/sap/
hostctrl/work/dev_saphostexec for more details on what happens when you execute these commands
(these files require sudo access on the host).
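For example, a failing LiveDatabaseUpdate call copied from the error log can be re-run by hand; the placeholders below must be replaced with your actual host, database, and site names, and the sapadm password supplied:

```
/usr/sap/hostctrl/exe/saphostctrl -host <ASE Host> -user sapadm <password>
-function LiveDatabaseUpdate -dbname <DB Name> -dbtype syb
-dbinstance <ASE Sitename> -updatemethod Check -updateoption TASK=REPLICATION_STATUS
```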
To view diagnostic information from these log files, set the trace level to 3:
● /usr/sap/hostctrl/work/dev_saphostctrl
● /usr/sap/hostctrl/work/dev_sapdbctrl
There are a number of issues that commonly cause errors in the HADR system.
● Directories do not have appropriate permissions – The installation directories for SAP ASE and the Fault
Manager, the execution directory, and /tmp require appropriate permissions. The Fault Manager creates
temporary directories under /tmp, to which it adds temporary files. If permissions prevent it from doing
this, the SAP Host Agent call fails without reporting the reason. Verify that the user executing the Fault
Manager has permissions on all the required directories.
● Verify the system library version – Check the version of glibc installed on the host:
ldd --version
● Enter the correct passwords for sa, DR_admin, and sapadm – It is very difficult to find the root cause of
errors when password mismatches are the culprit, so verify that the passwords are correct before errors
occur. The SAP Host Agent installation may not set a default password for sapadm, but one is required by
the Fault Manager. Add or update the password with the passwd command.
An error that reads "Error: Invalid Credentials" during the Fault Manager installation indicates that
one of the components for which you entered information is offline, or that the username and password
combination for that component is incorrect.
Consequently, the Fault Manager cannot connect to that component to verify that it is running. This connection
fails only when the username and password combination is incorrect or the component is down.
The Fault Manager Cannot Connect to SAP ASE After Restarting the Host
When you restart the host, the SAP Host Agent and its services are also restarted according to how their
entries were added to init.d during installation. However, installation changes the environment and starts the
SAP Host Agent with the LANG environment variable set to en_US.UTF-8.
The host environment may have changed before the restart, so the SAP Host Agent may start in an
environment that differs from what is specified in init.d. After you restart the host, source $SYBASE/
SYBASE.sh and restart the SAP Host Agent with this command:
/usr/sap/hostctrl/exe/saphostexec -restart
This Fault Manager error log indicates that the Fault Manager could not create a connection to the Host Agent.
The section of the error log that describes the error is:
This error occurs when sapstartsrv is not running. To solve the problem, check whether the
sapstartsrv process is running:
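A minimal check, for example with ps (the bracketed pattern keeps grep from matching itself):

```shell
#!/bin/sh
# List any running sapstartsrv processes; report when none are found.
ps -ef | grep '[s]apstartsrv' || echo "sapstartsrv is not running"
# If the process is missing, restart the SAP Host Agent:
# /usr/sap/hostctrl/exe/saphostexec -restart
```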
You can start and run the Fault Manager and the SAP Host Agent from the command line.
See The SAP Host Agent [page 27] for information about the SAP Host Agent commands. See Configuring the
Fault Manager from the Command Line [page 142] for information about the Fault Manager commands.
RMA Remote Method Invocation (RMI) is an API that RMA instances use in the HADR environment to invoke
methods from other RMA instances remotely. To support remote invocation, RMA RMI needs five ports: the
configured port it occupies, as well as the previous four consecutive ports.
Configuring a port number for RMA RMI in the setuphadr response file automatically reserves the four
consecutive integers before that configured number. For example, when the RMA RMI port number is
configured with 7000, 6999, 6998, 6997, and 6996 are also occupied by RMA RMI.
To avoid unexpected errors, make sure the configured port and its four predecessors are neither blocked by
firewalls nor used by any other applications.
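As an illustration, the five ports that a configured RMA RMI port reserves can be listed with a small shell loop (7000 is just the example value used above):

```shell
#!/bin/sh
# Print the five consecutive ports that RMA RMI occupies: the configured
# port plus the four ports immediately below it.
RMI_PORT=${RMI_PORT:-7000}
i=4
while [ "$i" -ge 0 ]; do
    echo $((RMI_PORT - i))
    i=$((i - 1))
done
```

Each listed port should then be checked against firewall rules and other listeners (for example, with netstat or ss).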
If you have run into errors with your environment setup when using the setuphadr utility, troubleshoot the
error on each site using the following steps:
1. Check whether any RMA processes are running on the site:
ps -ef|grep rma
2. Stop any RMA processes that are still running.
3. Remove the RMA configuration database:
rm -rf $SYBASE/DM/RMA-15_5/instances/AgentContainer/configdb
4. Find five consecutive port numbers that are all available, then modify the setuphadr response file for all
servers by changing rma_rmi_port to the highest-numbered port in your set of five consecutive ports. For
example:
From:
PRIM.rma_rmi_port=7000
To:
PRIM.rma_rmi_port=8000
From:
COMP.rma_rmi_port=7000
To:
COMP.rma_rmi_port=8000
5. Modify $SYBASE/DM/RMA-15_5/instances/AgentContainer/config/bootstrap.prop by
changing rmiport to the port number configured in the last step. For example:
From
rsge.bootstrap.rmiport=7000
To
rsge.bootstrap.rmiport=8000
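Step 5 can also be scripted. The sketch below assumes the bootstrap.prop path shown above and GNU sed's in-place editing; BOOTSTRAP and NEW_PORT are overridable so the change can be tried on a copy first:

```shell
#!/bin/sh
# Sketch: point rsge.bootstrap.rmiport at the new RMI port in bootstrap.prop.
BOOTSTRAP=${BOOTSTRAP:-$SYBASE/DM/RMA-15_5/instances/AgentContainer/config/bootstrap.prop}
NEW_PORT=${NEW_PORT:-8000}
if [ -w "$BOOTSTRAP" ]; then
    sed -i "s/^rsge\.bootstrap\.rmiport=.*/rsge.bootstrap.rmiport=${NEW_PORT}/" "$BOOTSTRAP"
fi
```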
Use commands, system procedures, and proxy tables to administer the HADR system.
Use the Replication Management Agent (RMA) commands to administer, monitor, or modify the properties of
your replication environment.
12.1.1 sap_add_device
Use the sap_add_device command to add SAP Replication Server device spaces. The sap_add_device
command issues an add partition command to the underlying SAP Replication Server that is defined for the
requested logical host.
For more details, see Replication Server Reference Manual > Replication Server Commands > create partition.
Syntax
Parameters
<logical_host_name>
Specifies the logical host name of the SAP Replication Server on which the device
space is to be added.
<logical_device_name>
Specifies the logical name of the device to be added.
<device_file>
Specifies the path to the directory in which you are creating the device and the name of
the device file.
<size>
Specifies the size of the device file (MB).
Example 1
Add a 20 MB device space named part9 on the site0 SAP Replication Server, using device file
/testenv7/partition9.dat:
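The original command text is not reproduced here; based on the parameter order above and the comma-separated form used by the other RMA commands in this chapter, the call would look similar to the following (an assumed form, not verified syntax):

```
sap_add_device site0, part9, /testenv7/partition9.dat, 20
go
```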
Example 2
If the command fails, find the error in the row where the TYPE value is Failing Command Error
Message:
Related Information
12.1.2 sap_cancel
Syntax
sap_cancel
go
12.1.3 sap_collect_log
Use the sap_collect_log command to collect the SAP ASE, Replication Server, and RMA logs from the
different hosts in an HADR system into one local directory.
Syntax
Parameters
<logical_host_name>
Specifies the logical host name to collect the logs from.
<number_of_days>
Specifies the number of days of logs to collect. Valid values are 1 through 20. The
default value is 5.
Example 1
Collects logs from the PR site for the last 10 days:
sap_collect_log PR 10
Example 2
Collects logs from the PR site for the last 5 (default) days:
sap_collect_log PR
Example 3
Collects logs from all the logical hosts in HADR for the last 10 days:
sap_collect_log 10
Example 4
Collects logs from all the logical hosts in HADR for the last 5 (default) days:
sap_collect_log
Usage
After you execute the sap_collect_log command, the result shows the directory path where the logs are
saved. The name of the directory is the same as the logical host name.
12.1.4 sap_configure_rat
Allows you to configure the Replication Agent thread for SAP ASE (RepAgent for short).
Syntax
Parameters
redirect_to_er
Directs the primary RepAgent to redirect its connection to the external SAP Replication
Server.
redirect_to_ha
Directs the primary RepAgent to redirect its connection to the standby SAP Replication
Server.
<database> | All
Specifies the database on which to execute the operation. Use All to execute the
operation on the whole HADR environment.
<ER admin user>
Specifies the admin user of the external SAP Replication Server to allow RMA to
connect to it.
<ER admin password>
Specifies the admin user password of the external SAP Replication Server to allow RMA
to connect to it.
Example 1
LOGICAL HOST DB NAME PROPERTY NAME   CONFIG VALUE RUN VALUE DEFAULT VALUE RESTART REQUIRED
------------ ------- --------------- ------------ --------- ------------- ----------------
site0        PI2     max commit wait 50           50        10000000      NO
Example 2
Note
In this example, the value of the scan timeout parameter is 15 seconds. This is the default value, and is
recommended to maintain optimum CPU usage. The value of the scan timeout parameter must be 1 or
more; 0 is not a valid value.
Example 3
Redirect all databases in primary SAP ASE to connect to the external SAP Replication Server:
Example 4
Redirect all databases in primary SAP ASE to connect to the standby SAP Replication Server:
12.1.5 sap_configure_rs
Use the sap_configure_rs command to list and configure the SAP Replication Server and its database
connection properties. Use this command after the HADR environment is set up with the
sap_setup_replication and sap_materialize commands.
Syntax
Parameters
all
Specifies that all logical hosts must be configured.
<logical_host_name>
Specifies that only the specified logical host must be configured.
RS
Specifies that the command refers to the server-level properties.
all
Specifies that the command refers to the connection-level properties of all the
database connections.
<database_name>
Note
If a static property is set and a suspend or resume cycle has not been performed, the Config Value is
different from the Run Value and the Restart Required value is set to Yes. To restart a connection,
execute sap_suspend_replication, then sap_resume_replication.
Examples
Example 1
sap_configure_rs HA, RS
go
LOGICAL HOST RS NAME    DB NAME PROPERTY NAME               CONFIG VALUE                    RUN VALUE                       DEFAULT VALUE                   RESTART REQUIRED
------------ ---------- ------- --------------------------- ------------------------------- ------------------------------- ------------------------------- ----------------
HA           PI2_REP_HA         stream_replication          false                           false                           false
HA           PI2_REP_HA         cap_prs_num_threads         2                               2                               2
HA           PI2_REP_HA         cap_sqm_write_request_limit 8388608                         8388608                         8388608
HA           PI2_REP_HA         cap_sqm_write_msg_limit     5000                            5000                            5000
HA           PI2_REP_HA         spq_file_directory          The path of the first partition The path of the first partition The path of the first partition
HA           PI2_REP_HA         spq_min_size                2048                            2048                            1024
Example 2
Sets the Replication Server property min_password_len to 12 on all Replication Servers in the DR
environment:
Example 3
Sets the Replication Server Database Connection property dsi_row_count_validation to on for all
database connections on all Replication Servers in the DR environment:
Example 4
Sets the Replication Server Database Connection property dsi_row_count_validation to on for all
PI2 database connections on all Replication Servers in the DR environment:
Usage
Logical Host, RS Name, DB Name, Property Name, Config Value, Run Value, Default
Value, Restart Required
12.1.6 sap_delay_replication
Delays replication to either a specified database or all the participating databases from the site. Delaying
replication from the primary database provides time to recover from any undesirable event, such as when a
table of records is dropped unexpectedly.
Although you can specify the default delay time with sap_set delay_time_minutes, issuing sap_set does
not itself enable delayed replication. Instead, enable delayed replication with sap_delay_replication
<logical_host_name>, on after you configure the HADR environment with the sap_setup_replication
and sap_materialize commands.
Syntax
Parameters
<logical_host_name>
The logical host name of the current standby Replication Server.
<database_name>
The name of the database where replication is to be delayed.
<delay_time_minutes>
Delay time, in minutes. Valid values are 1 to 1439.
on
Enables delayed replication.
off
Disables delayed replication.
Example 1
Enables delayed replication on site HA for all the participating databases, using a default delay time value
set by sap_set delay_time_minutes:
sap_delay_replication HA, on
go
TASKNAME          TYPE       VALUE
----------------- ---------- ----------------------------
Delay Replication Start Time Mon Nov 23 05:03:00 EST 2015
Example 2
Enables delayed replication and sets the delay time value to two minutes on all participating databases
configured for site HA:
TASKNAME          TYPE     VALUE
----------------- -------- ------
DelayReplication  Hostname site0
(9 rows affected)
Example 3
Enables delayed replication and sets the delay time value to two minutes on the PI2 database configured
for site HA:
TASKNAME          TYPE       VALUE
----------------- ---------- ----------------------------
Delay Replication Start Time Mon Nov 23 05:08:14 EST 2015
DelayReplication  Hostname   site0
(9 rows affected)
Example 4
Displays the delayed replication settings for all participating databases configured for site HA:
sap_delay_replication HA
go
DATABASE NAME DELAY STATE RUNTIME VALUE DEFAULT VALUE
------------- ----------- ------------- -------------
PI2 ON 2 0
db1 ON 2 0
(2 rows affected)
Example 5
TASKNAME          TYPE       VALUE
----------------- ---------- ----------------------------
Delay Replication Start Time Mon Nov 23 05:10:14 EST 2015
(9 rows affected)
Example 6
TASKNAME          TYPE         VALUE
----------------- ------------ ----------------------------
Delay Replication Start Time   Mon Nov 23 05:09:17 EST 2015
Delay Replication Elapsed Time 00:00:02
(9 rows affected)
12.1.7 sap_disable_external_replication
Disables replication to an external Replication Server for either a specific database or for all the databases.
Syntax
sap_disable_external_replication[, <database>]
Parameters
<database>
Examples
Example 1
Disables external replication for the PI2 database:
sap_disable_external_replication PI2
[Disable External Replication, Start Time, Tue Sep 13 06:05:25 EDT 2016]
[Disable External Replication, Elapsed Time, 00:00:08]
[DisableExternalReplication, Task Name, Disable External Replication]
[DisableExternalReplication, Task State, Completed]
[DisableExternalReplication, Short Description, Disable the flow of External
Replication]
[DisableExternalReplication, Long Description, Successfully disabled external
replication for database 'PI2'. Please execute
'sap_enable_external_replication PI2' to enable external replication for the
database.]
[DisableExternalReplication, Task Start, Tue Sep 13 06:05:25 EDT 2016]
[DisableExternalReplication, Task End, Tue Sep 13 06:05:33 EDT 2016]
[DisableExternalReplication, Hostname, site0]
12.1.8 sap_disable_replication
Disables replication from the primary host to the standby host, for either a specific database or all the
participating databases.
Syntax
sap_disable_replication <primary_logical_host_name> [,
<standby_logical_host_name>] [, <database_name>]
Parameters
<primary_logical_host_name>
Specifies the name of the logical host that identifies the primary site.
<standby_logical_host_name>
Specifies the name of the logical host that identifies the standby site.
<database_name>
Examples
Example 1
Disables the replication from primary host site0 to standby host site1 for database PI2:
sap_disable_replication site0,site1,PI2
go
TASKNAME TYPE
VALUE
------------------- -----------------
----------------------------------------------------------
Disable Replication Start Time
Tue Sep 15 02:42:08 EDT 2015
Disable Replication Elapsed Time
00:00:04
DisableReplication Task Name
Disable Replication
DisableReplication Task State
Completed
DisableReplication Short Description
Disable the flow of Replication
DisableReplication Long Description
Successfully disabled Replication for database 'PI2'.
DisableReplication Task Start
Tue Sep 15 02:42:08 EDT 2015
DisableReplication Task End
Tue Sep 15 02:42:12 EDT 2015
DisableReplication Hostname
site0
(9 rows affected)
Example 2
In this example, the command fails; find the error in the row where the TYPE value is Failing
Command Error Message:
sap_disable_replication site0,site1,ERP
go
TASKNAME TYPE
VALUE
------------------- -----------------
----------------------------------------------------------
Disable Replication Start Time
Fri Nov 20 00:24:01 EST 2015
Disable Replication Elapsed Time
00:00:00
DisableReplication Task Name
Disable Replication
DisableReplication Task State
Error
DisableReplication Short Description
Example 3
Disables the replication from primary host site0 for all databases.
sap_disable_replication site0
go
TASKNAME TYPE
VALUE
------------------- -----------------
----------------------------------------------------------
Disable Replication Start Time
Fri Nov 20 00:22:35 EST 2015
Disable Replication Elapsed Time
00:00:13
DisableReplication Task Name
Disable Replication
DisableReplication Task State
Completed
DisableReplication Short Description
Disable the flow of Replication
DisableReplication Long Description
Successfully disabled Replication for participating databases
'[master, saptools, PI2]'.
DisableReplication Task Start
Fri Nov 20 00:22:35 EST 2015
DisableReplication Task End
Fri Nov 20 00:22:48 EST 2015
DisableReplication Hostname
site0
(9 rows affected)
Usage
After replication stops, you can restart replication for the specified database (or all databases) only by
re-enabling replication and rematerializing the affected databases.
12.1.9 sap_drop_device
Use the sap_drop_device command to drop an SAP Replication Server device space.
Syntax
Parameter
<logical_host_name>
Indicates the logical host name of the SAP Replication Server on which the device
space is to be dropped.
<logical_device_name>
Indicates the logical name of the device to be dropped.
Example
Example 1
Drop Replication Server Device Start Time Tue May 14 07:47:11 UTC 2019
Drop Replication Server Device Elapsed Time 00:00:00
DropDevice Task Name Drop Replication Server Device
DropDevice Task State Completed
DropDevice Short Description Drop the Replication Server
device with the specified name.
DropDevice Long Description Successfully dropped device
'part3' on host 'site0:14975'.
DropDevice Task Start Tue May 14 07:47:11 UTC 2019
DropDevice Task End Tue May 14 07:47:11 UTC 2019
DropDevice Hostname site0
Example 2
Related Information
12.1.10 sap_drop_host
Before the HADR system is set up, the sap_drop_host command lets you drop a host that was previously
defined using the sap_set_host command.
Note
You cannot drop the host with the sap_drop_host command after you set up the HADR system. Instead,
use the sap_teardown command to tear down the HADR system, then run the sap_set_host command
and re-create the logical host.
Syntax
sap_drop_host <logical_host_name>
Parameters
<logical_host_name>
Specifies the name of the logical host to drop.
Examples
Example 1
sap_drop_host site0
go
TASKNAME TYPE
VALUE
----------- -----------------
--------------------------------------------------------------------
Drop Host Start Time
Mon Nov 16 00:39:33 EST 2015
Drop Host Elapsed Time
00:00:00
DropHostApi Task Name
Drop Host
DropHostApi Task State
Completed
DropHostApi Short Description
Drop the logical host from the environment.
DropHostApi Long Description
Submission of the design change for a model property was
successful.
DropHostApi Task Start
Mon Nov 16 00:39:33 EST 2015
DropHostApi Task End
Mon Nov 16 00:39:33 EST 2015
DropHostApi Hostname
site0
(9 rows affected)
If the command fails, find the error in the row where the TYPE value is Failing Command Error
Message:
sap_drop_host site0
go
TASKNAME    TYPE         VALUE
----------- ------------ ----------------------------
Drop Host   Start Time   Mon Nov 16 04:38:05 EST 2015
Drop Host   Elapsed Time 00:00:00
DropHostApi Task State   Error
DropHostApi Hostname     site0
12.1.11 sap_enable_external_replication
Enables replication to an external Replication Server for either a specific database or for all the databases.
Syntax
sap_enable_external_replication[, <database>]
Parameters
<database>
Examples
Example 1
Enables external replication for the PI2 database:
sap_enable_external_replication PI2
[Enable External Replication, Start Time, Tue Sep 13 06:07:15 EDT 2016]
[Enable External Replication, Elapsed Time, 00:00:02]
[EnableExternalReplication, Task Name, Enable External Replication]
[EnableExternalReplication, Task State, Completed]
[EnableExternalReplication, Short Description, Enable the flow of External
Replication]
[EnableExternalReplication, Long Description, Successfully enabled external
replication for database 'PI2'. The second truncation point of spq agent for
database 'PI2' has been reset.]
[EnableExternalReplication, Task Start, Tue Sep 13 06:07:15 EDT 2016]
[EnableExternalReplication, Task End, Tue Sep 13 06:07:17 EDT 2016]
[EnableExternalReplication, Hostname, site0]
12.1.12 sap_enable_replication
Enables replication from the primary host to the standby host, for either a specific database or all the
participating databases.
Syntax
sap_enable_replication <primary_logical_host_name> [,
<standby_logical_host_name>] [, <database_name>]
Parameters
<primary_logical_host_name>
Specifies the name of the logical host that identifies the primary site.
<standby_logical_host_name>
Specifies the name of the logical host that identifies the standby site.
<database_name>
Examples
Example 1
Enables the replication from primary host site0 to standby host site1 for database PI2:
sap_enable_replication site0,site1,PI2
go
TASKNAME TYPE
VALUE
------------------ -----------------
---------------------------------------------------------
Enable Replication Start Time
Fri Nov 20 00:41:19 EST 2015
Enable Replication Elapsed Time
00:01:36
EnableReplication Task Name
Enable Replication
EnableReplication Task State
Completed
EnableReplication Short Description
Enable the flow of Replication
EnableReplication Long Description
Successfully enabled Replication for database 'PI2'.
EnableReplication Task Start
Fri Nov 20 00:41:19 EST 2015
EnableReplication Task End
Fri Nov 20 00:42:55 EST 2015
EnableReplication Hostname
site0
(9 rows affected)
Example 2
In this example, the command fails; find the error in the row where the TYPE value is Failing
Command Error Message:
sap_enable_replication site0,site1,ERP
go
TASKNAME TYPE
VALUE
------------------- -----------------
----------------------------------------------------------
Enable Replication Start Time
Fri Nov 20 00:27:00 EST 2015
Enable Replication Elapsed Time
00:00:00
EnableReplication Task Name
Enable Replication
EnableReplication Task State
Error
EnableReplication Short Description
Example 3
Enables the replication from primary host 'site0' for all databases.
sap_enable_replication site0
go
TASKNAME            TYPE               VALUE
------------------  -----------------  ----------------------------------------------------
Enable Replication  Start Time         Fri Nov 20 00:46:50 EST 2015
Enable Replication  Elapsed Time       00:03:09
EnableReplication   Task Name          Enable Replication
EnableReplication   Task State         Completed
EnableReplication   Short Description  Enable the flow of Replication
EnableReplication   Long Description   Successfully enabled Replication for participating
                                       databases '[master, db1, PI2]'.
EnableReplication   Task Start         Fri Nov 20 00:46:50 EST 2015
EnableReplication   Task End           Fri Nov 20 00:49:59 EST 2015
EnableReplication   Hostname           site0

(9 rows affected)
Usage
Failover is switching activity to the standby site in the event of a failure on
the primary site.
A planned failure occurs on a schedule. Typically part of a test or other
exercise, a planned failure allows an orderly sequence of steps to be performed
to move processing to the standby site.
An unplanned failure is unscheduled, occurring unintentionally and without
warning. However, a sequence of events similar to a planned failover occurs.
Use the sap_failover command to perform planned and unplanned failovers. The sap_failover
command:
● Monitors replication to verify that all paths from the primary database to the
standby are complete, and that no in-flight data remains to be replicated for
the SAP databases, master, and SAP_SID.
● Suspends the Replication Server at the standby site from applying any additional data from the primary.
● Configures and starts Replication Agent threads for each database in the standby server.
● Reconfigures the Replication Server to accept activity from the standby database.
Note
You cannot perform two sap_failover commands in parallel. That is, the first sap_failover command
must complete before you issue a second.
Syntax
Parameters
<primary_logical_host_name>
The name of the logical host that identifies the primary site.
<standby_logical_host_name>
The name of the logical host that identifies the standby site.
<deactivate_timeout>
Specifies the number of seconds the process waits while deactivating the primary
data server. If the timeout is reached, the failover process terminates.
force
(Optional) Causes the failover process to continue if the timeout value is
reached. Applies to the deactivate step. However, the failover may not succeed
for a number of reasons (for example, if there is a large SPQ backlog).
<drain_timeout>
unplanned
(Optional) Specifies an unplanned failover.
Examples
Example 1
Performs a planned failover, when all the servers are up in the HADR system:
sap_failover PR HA 60
go
TASKNAME        TYPE                   VALUE
--------------  ---------------------  ------------------------------------------
Failover        Start Time             Thu Nov 19 20:36:39 EST 2015
Failover        Elapsed Time           00:00:00
DRExecutorImpl  Task Name              Failover
DRExecutorImpl  Task State             Running
DRExecutorImpl  Short Description      Failover moves primary responsibility from
                                       current logical source to logical target.
DRExecutorImpl  Long Description       Started task 'Failover' asynchronously.
DRExecutorImpl  Additional Info        Please execute command 'sap_status task'
                                       to determine when task 'Failover' is
                                       complete.
Failover        Task Name              Failover
Failover        Task State             Running
Failover        Short Description      Failover moves primary responsibility from
                                       current logical source to logical target.
Failover        Long Description       Waiting 3 seconds: Waiting for the end of
                                       data marker for database 'master' to be
                                       received.
Failover        Current Task Number    6
Failover        Total Number of Tasks  18
Failover        Hostname               site0

(14 rows affected)
sap_status
go

TASKNAME  TYPE                    VALUE
--------  ----------------------  ----------------------------------------------
Failover  Start Time              Thu Nov 19 20:36:37 EST 2015
Failover  Elapsed Time            00:00:06
Failover  Task Name               Failover
Failover  Task State              Completed
Failover  Short Description       Failover moves primary responsibility from
                                  current logical source to logical target.
Failover  Long Description        Failover from source 'PR' to target 'HA' is
                                  complete. The target may be unquiesced.
Failover  Additional Information  Please run command 'sap_host_available PR' to
                                  complete disabling replication from the old
                                  source, now that the target 'HA' is the new
                                  primary.
Failover  Current Task Number     14
Failover  Total Number of Tasks   14
Failover  Task Start              Thu Nov 19 20:36:37 EST 2015
Failover  Task End                Thu Nov 19 20:36:43 EST 2015
Failover  Hostname                site0

(12 rows affected)
Example 2
Performs an unplanned failover, when the primary ASE server is down in the HADR system:
sap_failover PR HA 60 unplanned
go
TASKNAME        TYPE               VALUE
--------------  -----------------  ---------------------------------------------
Failover        Start Time         Thu Nov 19 20:57:20 EST 2015
Failover        Elapsed Time       00:00:00
DRExecutorImpl  Task Name          Failover
DRExecutorImpl  Task State         Running
DRExecutorImpl  Short Description  Failover moves primary responsibility from
                                   current logical source to logical target.
DRExecutorImpl  Long Description   Started task 'Failover' asynchronously.
DRExecutorImpl  Additional Info    Please execute command 'sap_status task' to
                                   determine when task 'Failover' is complete.
Failover        Task Name          Failover
sap_status
go
TASKNAME  TYPE                   VALUE
--------  ---------------------  ------------------------------------------------
Failover  Start Time             Thu Nov 19 20:57:18 EST 2015
Failover  Elapsed Time           00:00:06
Failover  Task Name              Failover
Failover  Task State             Completed
Failover  Short Description      Failover moves primary responsibility from
                                 current logical source to logical target.
Failover  Long Description       Failover from source 'PR' to target 'HA' is
                                 complete. The target may be unquiesced.
Failover  Additional Info        When the source site for 'PR' is available,
                                 please run command 'sap_host_available PR' to
                                 disable replication from that source, now that
                                 the target 'HA' is the new primary.
Failover  Additional Info 2      The primary Replication Server 'site1:5005' is
                                 configured for synchronization mode and was
                                 found running in synchronization mode.
Failover  Current Task Number    12
Failover  Total Number of Tasks  12
Failover  Task Start             Thu Nov 19 20:57:18 EST 2015
Failover  Task End               Thu Nov 19 20:36:24 EST 2015
Failover  Hostname               site0

(13 rows affected)
The sap_failover_drain_to_er command makes sure that the incremental backlogs from the HADR
cluster are drained to the external replication system. Use this command while performing a failover within an
HADR cluster with external replication.
Syntax
Parameters
<time_out>
Specifies the number of seconds the command waits for the remaining backlogs to
be fully applied to the external system. If the timeout is reached before the
backlog drain to the external replication system finishes, the
sap_failover_drain_to_er command reports an error; retry the command with a
higher <time_out> value.
skip
Forces the failover process to continue without applying the remaining backlogs
to the external replication system. The skip option disables replication to the
external replication system, causing the external replicate databases to be out
of sync with the HADR cluster.
<dbName>
Specifies the database on the external system to which replication is disabled from the
HADR cluster. If you do not specify a database name following the skip parameter, you
disable replication from the HADR cluster to all databases on the external system.
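The retry pattern suggested by the <time_out> description can be sketched as follows. This is illustrative only: run_rma_command is a hypothetical callable standing in for submitting an RMA command (for example, through an isql session), not part of the product.

```python
# Sketch only: retry sap_failover_drain_to_er with progressively larger
# timeouts, as the parameter description recommends when the backlog drain
# does not finish in time. run_rma_command is a hypothetical helper that
# returns True when the command completes without error.

def drain_to_er_with_retries(run_rma_command, timeouts=(120, 300, 600)):
    """Return the timeout value that succeeded, or None if every attempt fails."""
    for timeout in timeouts:
        if run_rma_command(f"sap_failover_drain_to_er {timeout}"):
            return timeout
    return None
```

The skip option remains a last resort, since it leaves the external replicate databases out of sync.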
Examples
Example 1
Drains the transaction backlogs to the external replication system with a timeout of 120 seconds:
sap_failover_drain_to_er 120
go
sap_status
go
TASKNAME  TYPE          VALUE
--------  ------------  ----------------------------
Status    Start Time    Wed Sep 07 12:01:00 UTC 2016
Status    Elapsed Time  00:00:36
Example 2
Skips the transfer of the transaction backlog to the external replication system when performing a failover
within an HADR cluster:
sap_failover_drain_to_er skip
go
sap_status
go
TASKNAME  TYPE          VALUE
--------  ------------  ----------------------------
Status    Start Time    Wed Sep 07 12:23:29 UTC 2016
Status    Elapsed Time  00:00:09
Example 3
Disables the replication to the database called erp on the external system when performing a failover
within an HADR cluster:
sap_status
go
TASKNAME           TYPE                   VALUE
-----------------  ---------------------  ----------------------------------------
Status             Start Time             Wed Sep 14 05:13:34 EDT 2016
Status             Elapsed Time           00:00:11
FailoverDrainToER  Task Name              Failover drain to ER.
FailoverDrainToER  Task State             Completed
FailoverDrainToER  Short Description      Failover drain to ER deactivate old
                                          replication path and activate new
                                          replication path for external
                                          replication system.
FailoverDrainToER  Long Description       Issuing command to suspend SPQ Agents.
FailoverDrainToER  Additional Info        Please run command
                                          'sap_failover_drain_to_er ' to continue
                                          the failover drain to external
                                          replication operation.
FailoverDrainToER  Current Task Number    1
FailoverDrainToER  Total Number of Tasks  1
FailoverDrainToER  Task Start             Wed Sep 14 05:13:34 EDT 2016
FailoverDrainToER  Task End               Wed Sep 14 05:13:45 EDT 2016
FailoverDrainToER  Hostname               site0
After you use sap_failover_drain_to_er skip <dbName> to disable replication to a database on the
external system, run sap_failover_drain_to_er <timeout> to make sure all backlogs on other databases
are drained to the replicate databases.
12.1.15 sap_help
Displays a list of available commands, or detailed information for the specified command.
Syntax
sap_help [<command>]
go
12.1.16 sap_host_available
Use the sap_host_available command to reconfigure the primary database as the new backup for the
activity occurring at the standby site.
● Disables the Replication Agents on the requested site for the master and SAP_SID databases in an SAP
environment so that no data is replicated out from this site.
● Reconfigures the Replication Server to not accept activity from the requested site.
● Purges the Replication Server queues of any possible in-flight data.
● Resets the Replication Server at the current standby site to allow application of future activity, in the event
a subsequent failover back to the primary site is needed.
Syntax
Parameters
<primary_logical_host_name>
Examples
Example 1
sap_host_available PR
go
TASKNAME       TYPE                   VALUE
-------------  ---------------------  -----------------------------------------
HostAvailable  Start Time             Thu Nov 19 20:47:34 EST 2015
HostAvailable  Elapsed Time           00:01:31
HostAvailable  Task Name              HostAvailable
HostAvailable  Task State             Completed
HostAvailable  Short Description      Resets the original source logical host
                                      when it is available after failover.
HostAvailable  Long Description       Completed the reset process of logical
                                      host 'PR' receiving replication from
                                      logical host 'HA'.
HostAvailable  Current Task Number    10
HostAvailable  Total Number of Tasks  10
HostAvailable  Task Start             Thu Nov 19 20:47:34 EST 2015
HostAvailable  Task End               Thu Nov 19 20:49:05 EST 2015
HostAvailable  Hostname               site0

(11 rows affected)
Example 2
Reconfigures the primary database after an unplanned failover and a restart of
the primary ASE server:
sap_host_available PR
go
12.1.17 sap_list_device
Use the sap_list_device command to list information for devices added to SAP Replication Server.
Syntax
Parameter
<logical_host_name>
Indicates the logical host name of the SAP Replication Server for which the device
information is to be listed.
<logical_device_name>
Indicates the logical device name for which the information is to be listed. If not
indicated, RMA lists information for all devices on the specified SAP Replication Server.
Example 1
List information for all devices added in site0 SAP Replication Server:
sap_list_device site0
go
LOGICAL HOST  LOGICAL PARTITION NAME  PHYSICAL PARTITION NAME   TOTAL SIZE  USED SIZE  STATE
------------  ----------------------  ------------------------  ----------  ---------  ----------
site0         part01                  /testenv7/partition1.dat  1024        144        ON-LINE///
site0         part2                   /testenv7/partition2.dat  16          0          ON-LINE///
Example 2
List information for the device named part01 added in site0 SAP Replication Server:
LOGICAL HOST  LOGICAL PARTITION NAME  PHYSICAL PARTITION NAME   TOTAL SIZE  USED SIZE  STATE
------------  ----------------------  ------------------------  ----------  ---------  ----------
site0         part01                  /testenv7/partition1.dat  1024        144        ON-LINE///
12.1.18 sap_materialize
Performs the initial copy of data from one site to the other.
Syntax
Parameters
auto
Performs automatic materialization. The auto option is the only option available
for master database materialization, to manage consistently which tables are
copied. For other databases, you can use either the auto or manual
materialization method.
start
Configures the replication to anticipate the dump marker, generated by the dump
command.
retry
Retries automatic materialization.
external
Materializes a standby database without using the dump and load solution. The
external keyword skips the materialization process and sets up an active replication
path between the primary and the standby databases.
imprint
Validates materialization before starting the external materialization process. If the
external process requires the database to be offline, use imprint to add the
verification row.
finish
Verifies that the verification row exists in the standby database.
force
Bypasses the verification test and finishes the materialization process.
<source_logical_hostname>
Specifies the logical host name of the source.
<target_logical_hostname>
Specifies the logical host name of the target.
<database>
Specifies the name of the database.
Examples
Example 1
Automatically performs the initial data copy from primary host site0 to standby host site1 for master
database:
sap_materialize auto,site0,site1,master
go
TASKNAME                      TYPE               VALUE
----------------------------  -----------------  ------------------------------------------
Materialize                   Start Time         Fri Nov 20 01:13:51 EST 2015
Materialize                   Elapsed Time       00:00:02
DRExecutorImpl                Task Name          Materialize
DRExecutorImpl                Task State         Running
DRExecutorImpl                Short Description  Materialize database
DRExecutorImpl                Long Description   Started task 'Materialize' asynchronously.
DRExecutorImpl                Additional Info    Please execute command 'sap_status task'
                                                 to determine when task 'Materialize' is
                                                 complete.
Materialize                   Task Name          Materialize
Materialize                   Task State         Running
Materialize                   Short Description  Materialize database
Materialize                   Long Description   Starting materialization of the master
                                                 database from source 'site0' to target
                                                 'site1'.
Materialize                   Task Start         Fri Nov 20 01:13:51 EST 2015
Materialize                   Hostname           site0
PerformMasterMaterialization  Task Name          Materialize the Master database
PerformMasterMaterialization  Task State
sap_status task
go
TASKNAME     TYPE               VALUE
-----------  -----------------  --------------------------------------------
Status       Start Time         Fri Nov 20 01:13:51 EST 2015
Status       Elapsed Time       00:00:28
Materialize  Task Name          Materialize
Materialize  Task State         Completed
Materialize  Short Description  Materialize database
Materialize  Long Description   Completed automatic materialization of
                                database 'master' from source 'site0' to
                                target 'site1'.
Materialize  Task Start         Fri Nov 20 01:13:51 EST 2015
Materialize  Task End           Fri Nov 20 01:14:19 EST 2015
Materialize  Hostname           site0

(9 rows affected)
Example 2
If the command fails, find the error in the row where the TYPE value is
Failing Command Error Message:
sap_status task
go
TASKNAME                   TYPE          VALUE
-------------------------  ------------  --------
Status                     Elapsed Time  00:00:13
Materialize                Task State    Error
Materialize                Hostname      site0
DropSubscriptionWithForce  Task State    Error
DropSubscriptionWithForce  Hostname      site0
Example 3
Manually performs the initial data copy from primary host site0 to standby host site1 for PI2 database:
TASKNAME        TYPE               VALUE
--------------  -----------------  --------------------------------------------
Materialize     Start Time         Fri Nov 20 01:33:30 EST 2015
Materialize     Elapsed Time       00:00:01
DRExecutorImpl  Task Name          Materialize
DRExecutorImpl  Task State         Completed
DRExecutorImpl  Short Description  Materialize database
DRExecutorImpl  Long Description   Started task 'Materialize' asynchronously.
DRExecutorImpl  Additional Info    Please execute command 'sap_status task' to
                                   determine when task 'Materialize' is
                                   complete.
Materialize     Task Name          Materialize
Materialize     Task State         Completed
Materialize     Short Description  Materialize database
Materialize     Long Description   Adding the subscription required for
                                   materialization of database 'PI2' to the
                                   Replication Server on host 'site1'.
Materialize     Task Start         Fri Nov 20 01:33:30 EST 2015
Materialize     Task End           Fri Nov 20 01:33:31 EST 2015
Materialize     Hostname           site0

(14 rows affected)
TASKNAME        TYPE               VALUE
--------------  -----------------  --------------------------------------------
Materialize     Start Time         Fri Nov 20 01:43:39 EST 2015
Materialize     Elapsed Time       00:00:02
DRExecutorImpl  Task Name          Materialize
DRExecutorImpl  Task State         Running
DRExecutorImpl  Short Description  Materialize database
DRExecutorImpl  Long Description   Started task 'Materialize' asynchronously.
DRExecutorImpl  Additional Info    Please execute command 'sap_status task' to
                                   determine when task 'Materialize' is
                                   complete.
Materialize     Task Name          Materialize
Materialize     Task State         Running
Materialize     Short Description  Materialize database
Materialize     Long Description   Validating user specified arguments.
Materialize     Task Start         Fri Nov 20 01:43:39 EST 2015
Materialize     Hostname           site0

(13 rows affected)
Usage
● During materialization, sap_materialize drops the database users from the database. If it cannot drop
the users after 20 attempts (waiting 10 seconds between each attempt), it forcibly removes them with a
kill with force command.
● When you manually materialize the replicate database using the sap_materialize start command,
RMA prompts you to dump the database on the primary node with a specified label. When you materialize
the replicate database automatically using sap_materialize auto, RMA dumps the database with the
specified label internally. This ensures that the replication restarts only after the labeled database dump is
loaded.
Check the status by using the sap_status command:
sap_status
go
TASKNAME     TYPE              VALUE
-----------  ----------------  ----------------------------------------------
Status       Start Time        Wed Sep 28 21:56:47 EDT 2016
Materialize  Long Description  The prerequisite work for manually dumping and
                               loading database PI2 is finished. You can use
                               "dump database PI2 to ... with
Materialize  Hostname          site0

(9 rows affected)
12.1.19 sap_pre_setup_check
Use the sap_pre_setup_check command to test operating system permissions, database user roles and
privileges, and host network port availability. Use the sap_pre_setup_check command before you use the
sap_setup_replication or sap_materialize command.
Syntax
Parameters
<env_type>
Specifies the type of environment the presetup check process validates. DR Agent
supports the disaster recovery (dr) option, which configures the environment
using ASE HADR.
<primary_logical_host_name>
Specifies the name of the logical host that identifies the primary site.
<standby_logical_host_name>
Specifies the name of the logical host that identifies the standby site.
Example 1
Usage
Before executing the replication setup command, correct any errors returned by sap_pre_setup_check.
sap_setup_replication executes the same set of tests as part of the setup process.
12.1.20 sap_purge_trace
Syntax
Parameter
<logical_host_name>
Specifies the logical host name that identifies the primary, standby, or DR site.
<days_to_keep>
Specifies the number of days for which you want to retain the trace information that
was inserted into rs_ticket_history. Trace information inserted before the
specified number of days is purged from the table. The valid values are positive integers
between 1 and 365.
<database_name>
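The retention rule for <days_to_keep> can be sketched as a small calculation (a model of the stated rule, not product code): rows inserted into rs_ticket_history before now minus <days_to_keep> days are purged.

```python
# Sketch: compute the purge cutoff for sap_purge_trace. Trace rows inserted
# into rs_ticket_history before (now - days_to_keep) are purged; days_to_keep
# must be a positive integer between 1 and 365.
from datetime import datetime, timedelta

def purge_cutoff(now: datetime, days_to_keep: int) -> datetime:
    if not 1 <= days_to_keep <= 365:
        raise ValueError("days_to_keep must be between 1 and 365")
    return now - timedelta(days=days_to_keep)
```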
Example
Example 1
sap_purge_trace site0,7,master
go
Example 2
sap_purge_trace site1,7
go
12.1.21 sap_resume_component
Parameters
<cpnt_name>
Specifies the name of the component to be resumed.
<src_hostname>
Specifies the source host name.
<tgt_hostname>
Specifies the target host name.
<db_name>
Specifies the database name.
Examples
Example 1
Example 3
Usage
The supported components are:
● RAT
● RATCI
Set the <cpnt_name> parameter to ALL to resume all the supported components.
12.1.22 sap_resume_replication
Use the sap_resume_replication command to resume the replication to a specified database or all
databases that are in the participating databases list (master and ERP).
Syntax
Parameters
<standby_logical_host_name>
Specifies the logical host name of the standby server.
<database_name>
Specifies the name of the database.
12.1.23 sap_send_trace
Latency calculations are based on the most recent trace flag sent through the
system. Internally, this command inserts an rs_ticket into the source database
or databases. Latency is calculated from the most recent entry in the target
database's rs_ticket_history table by the sap_status task command. When
executing the sap_send_trace command, you can specify a database name. If you do
not specify a database name, a trace is sent to all participating databases for
that host.
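The latency idea reduces to a simple time difference, sketched here with illustrative names (the two timestamps stand for when the trace ticket was inserted at the source and when it appeared in the target's rs_ticket_history table; they are not the actual schema columns):

```python
# Sketch: replication latency is the time the most recent trace ticket took to
# travel from the primary database to the standby's rs_ticket_history table.
from datetime import datetime

def latency_seconds(ticket_sent_at: datetime, ticket_applied_at: datetime) -> float:
    return (ticket_applied_at - ticket_sent_at).total_seconds()
```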
Parameters
<primary_logical_host_name>
Specifies the logical host name of the current primary Replication Server.
<database_name>
Specifies the name of the database where its latency is to be monitored.
Examples
Example 1
Sends trace on primary logical host site0 so that the latency for all participating databases is calculated by
using the sap_status path command:
sap_send_trace site0
go
If the command fails, find the error in the row where the TYPE value is
Failing Command Error Message:
sap_send_trace site0
TASKNAME                TYPE          VALUE
----------------------  ------------  ----------------------------
Execute sap_send_trace  Start Time    Mon Nov 16 04:42:02 EST 2015
Execute sap_send_trace  Elapsed Time  00:00:00
DomainSendTrace         Task State    Error
DomainSendTrace         Hostname      site0
Example 2
Sends trace on primary logical host site0 for database PI2 so that the replication latency of PI2 is
calculated by using the sap_status path command:
Use the sap_set command to read or set the initial configuration parameter values that are required for
setting up the replication system for ASE HADR.
Syntax
Parameters
<global_level_property>
Sets the values for properties that apply to the whole environment. The
properties are as follows:
installation_mode  Specifies the HADR system type. For Custom HADR, the mode
                   is nonBS.
memory_size        Specifies the memory limit for SAP Replication Server
                   instances on all HADR nodes. The value of memory_size
                   ranges from 1 GB to 2097151 GB. You can set this property
                   at any time, either before or after the HADR system is set
                   up. If unset, RMA tunes the memory limit for SAP
                   Replication Server automatically.
<logical_host_name>, <property_name>
Sets the values for properties that apply to the specified logical host. The
properties are as follows:
ase_port                Specifies the SAP ASE server port number of the
                        logical host.
ase_user                Specifies the login name used to connect to the ASE
                        server of this logical host. This is a read-only
                        property.
ase_backup_server_port  Specifies the backup server port number of the
                        logical host.
ase_instance            Specifies the name of the SAP ASE server. This is a
                        read-only property.
dr_plugin_port          Specifies the RMA port number of the logical host.
                        This is a read-only property.
Examples
Example 1
Example 2
Example 3
Specifies the HADR system type. For Custom HADR, the mode is nonBS:
Example 4
Sets the SAP ASE server port number to 5000:
Example 5
Example 6
Delays the site by 60 minutes:
Example 8
Example 9
Example 10
Sets the port number of the Replication Server to 5005:
Example 11
Sets the port number of the Replication Server ERSSD to 5006:
Example 12
Sets the port number for Replication Agent of ERSSD to 5007:
Example 13
Example 14
Sets the partition device size to 256 MB:
Example 15
Example 16
Sets the minimum size of the Replication Server simple persistent queue to 2000 MB:
Example 17
Sets the maximum size of the Replication Server simple persistent queue to 8000 MB:
● Execute sap_set without any parameter to query the configured value of the memory_size parameter.
Use the sap_set device_buffer_size command to adjust the amount of memory (partition disk space)
that is allocated to the device buffer.
For optimum performance of the Replication Server, the valid range of the buffer size is 256 MB to 1 TB.
Syntax
Parameters
logical_hostname
Specifies the name of the logical Replication Server host on which to set the
device buffer size.
size
Specifies the size of the buffer that you want to set, in megabytes.
Examples
Example 1
Shows how to change partition disk space to 300 MB, where vegas is the logical host:
Use the sap_set simple_persistent_queue_size command to specify the minimum disk space to
allocate to the simple persistent queue (SPQ) in the SAP Replication Server.
Note
SAP Replication Server creates one simple persistent queue of the size that you specify for each database
that is to be replicated. The default minimum disk space that it allocates to each SPQ is 1000 MB and it is
equal to the default maximum disk space size. Specify the minimum and maximum space of the SPQ
based on the physical environment of the HADR system. Adjusting the persistent queue size is optional.
During initialization, the SPQ creates two data files with sizes equal to:
simple_persistent_queue_size/2
As the SPQ fills with data, it increases in size, and the system creates additional SPQ data files. The maximum
size of these data files is also:
simple_persistent_queue_size/2
The SPQ is full when the total size of all SPQ data files reaches the size of the
simple_persistent_queue_max_size, and any data that can be truncated is automatically truncated.
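The sizing arithmetic above can be sketched as follows; this is a model of the stated rules only, not product code:

```python
# Sketch of the SPQ sizing rules: each data file is created at
# simple_persistent_queue_size / 2, and the queue counts as full when the
# total size of all data files reaches simple_persistent_queue_max_size.

def spq_data_file_size_mb(simple_persistent_queue_size_mb: int) -> float:
    return simple_persistent_queue_size_mb / 2

def spq_is_full(data_file_sizes_mb, simple_persistent_queue_max_size_mb) -> bool:
    return sum(data_file_sizes_mb) >= simple_persistent_queue_max_size_mb
```

With the default 1000 MB minimum and maximum, the SPQ starts with two 500 MB data files and is full once their combined size reaches 1000 MB.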
Note
The minimum SPQ size for the master database is controlled by the spq_min_size
entry in the $SYBASE/DM/RMA-16_0/instances/AgentContainer/config/RS_DB_master.properties
file. Alter this value before configuring a new HADR system. The default value
is 500 MB. For example:
# This file defines RepServer database connection properties that are set
# when replication is set up.
#
db_packet_size=16384
dsi_cmd_batch_size=65536
spq_min_size=500M
. . .
dsi_serialization_method=wait_after_commit
Change the parameter on the Replication Server command line to alter the minimum SPQ size for an
existing HADR system.
Syntax
<logical_host_name>
Specifies the name of the logical Replication Server host on which to set the
simple persistent queue size.
<spq_dir_size>
Specifies the queue size that you want to set in megabytes.
Examples
Example 1
Changes the minimum SPQ disk space to 10,000 MB, where vegas is the logical host:
Use the sap_set simple_persistent_queue_max_size command to specify the maximum disk space to
allocate to the simple persistent queue (SPQ) in the SAP Replication Server.
Note
SAP Replication Server creates one simple persistent queue of the size that you specify for each database
that is to be replicated. The default minimum disk space that it allocates to each SPQ is 1000 MB and it is
equal to the default maximum disk space size. Specify the minimum and maximum space of the SPQ
based on the physical environment of the HADR system. Adjusting the persistent queue size is optional.
During initialization, the SPQ creates two data files with sizes equal to:
simple_persistent_queue_size/2
As the SPQ fills with data, it increases in size, and the system creates additional SPQ data files. The maximum
size of these data files is also:
simple_persistent_queue_size/2
The SPQ is full when the total size of all SPQ data files reaches the size of the
simple_persistent_queue_max_size, and any data that can be truncated is automatically truncated.
The minimum SPQ size for the master database is controlled by the spq_min_size
entry in the $SYBASE/DM/RMA-16_0/instances/AgentContainer/config/RS_DB_master.properties
file. Alter this value before configuring a new HADR system. The default value
is 500 MB. For example:
# This file defines RepServer database connection properties that are set
# when replication is set up.
#
db_packet_size=16384
dsi_cmd_batch_size=65536
spq_min_size=500M
. . .
dsi_serialization_method=wait_after_commit
Change the parameter on the Replication Server command line to alter the minimum SPQ size for an
existing HADR system.
Syntax
Parameters
<logical_host_name>
Specifies the name of the logical Replication Server host on which to set the
simple persistent queue max size.
<spq_dir_size>
Specifies the queue size that you want to set in megabytes.
Examples
Example 1
Changes the maximum SPQ disk space to 10,000 MB, where vegas is the logical host:
Use the sap_set memory_size command to set the memory limit for SAP Replication Server instances on all
HADR nodes. The value of memory_size ranges from 1 GB to 2097151 GB. You can set this property either
before or after the HADR system is set up. If not set, RMA tunes the memory limit for SAP Replication Server
automatically.
Syntax
Example
Set the memory limit for SAP Replication Server instances to 100 GB on all HADR nodes:
sap_set
PROPERTY VALUE
------------------------ ------------
maintenance_user ERP_maint
sap_sid ERP
installation_mode BS
participating_databases [master,ERP]
connection_timeout 5
connection_alloc_once true
memory_size 100
Setting the memory limit for SAP Replication Server using the memory_size parameter works as follows:
● If the memory_size parameter is not set before the initial setup, RMA automatically tunes the memory
limit for SAP Replication Server when setting up the system or adding and removing databases and the DR
host.
● If the memory_size parameter is set before initial setup, RMA uses the value as the memory limit for SAP
Replication Server when setting up and does not tune the memory limit for SAP Replication Server when
adding and removing databases and the DR host.
● If the memory_size parameter is unset after the initial setup, RMA changes the
memory limit from the previously set value to the auto-calculated value and
tunes the memory limit when adding and removing databases and the DR host.
● If the memory_size parameter is set after the initial setup, RMA changes the memory limit for SAP
Replication Server from the auto-calculated value to the specified value and does not tune the memory
limit for SAP Replication Server when adding and removing databases and the DR host.
● After executing sap_set memory_size to change the memory_size parameter, RMA uses the new value
as the memory limit for SAP Replication Server immediately.
● If the execution of sap_set memory_size fails to change the memory limit for some SAP Replication
Server instances, the memory limit in SAP Replication Server instances becomes inconsistent. Resolve the
errors that cause the failure manually and then execute the command sap_set memory_size again.
Check if the new value is set successfully using the sap_set command.
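The bullet points above reduce to a simple rule, sketched here with hypothetical names:

```python
# Sketch: if memory_size is unset, RMA uses an auto-calculated limit and keeps
# tuning it on topology changes (adding or removing databases and the DR host);
# if memory_size is set, RMA uses the explicit value and stops tuning.

def effective_memory_limit_gb(memory_size_gb, auto_calculated_gb):
    """Return (limit_gb, rma_keeps_tuning) under the rules above."""
    if memory_size_gb is None:
        return auto_calculated_gb, True
    return memory_size_gb, False
```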
12.1.25 sap_set_databases
Syntax
Parameters
<database_name>
Specifies the name of the database to replicate.
<additional_database_name>
Specifies a comma-separated list of additional databases to replicate.
Example 1
Sets master, db1, and PI2 as the databases to participate in the new replication environment.
Usage
The database list is verified, persisted, and used to validate any SAP commands specifying a database name. If
a command's database name does not exist in the database list, the command is rejected.
Do not set the saptools database for replication; SAP ASE HADR does not support
replication of the saptools database.
12.1.26 sap_set_host
Use the sap_set_host command to register a new HADR logical host. The logical host consists of an ASE
server, a Replication Server, and an RMA.
Syntax
Parameters
<logical_host_name>
Specifies the logical host name to reference the site. The <logical_host_name>
must have 10 characters or fewer and contain only digits or letters.
<ase_host_name>
Specifies the TCP/IP host name of the SAP ASE data server.
<ase_port_num>
Specifies the TCP/IP port number the SAP ASE data server is listening on.
<rs_host_name>
Specifies the TCP/IP host name of the SAP Replication Server.
<rs_port_num>
Specifies the TCP/IP port number the SAP Replication Server is listening on.
<dr_agent_port_num>
Specifies the TCP/IP port number the DR Agent is listening on (DR Agent is on the
same host as the SAP Replication Server).
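A hedged sketch of a registration call, with illustrative host names and port numbers (the argument order follows the parameter list above and may differ in your version):

```
sap_set_host site0, asehost0.sap.com, 4901, rshost0.sap.com, 4905, 4909
go
```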
12.1.27 sap_set_password
Use the sap_set_password command to set or change the password for DR_admin on the primary SAP ASE,
SAP Replication Server, and on the standby SAP ASE and SAP Replication Server.
The primary and standby SAP ASE and SAP Replication Server contain the system administrator DR_admin
login. The Replication Management Agent authenticates its login by attempting to log in to one of these
servers. This requires the DR_admin password to be the same on all servers. After the execution of the
sap_set_password command, you must log out and then back in to the Replication Management Agent for
the password change to take effect.
Syntax
<current_password>
Specifies the current password.
<new_password>
Specifies the new password.
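Based on the two parameters above, the call can be sketched as follows (the passwords are placeholders):

```
sap_set_password <current_password>, <new_password>
go
```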
Examples
Example 1
Changes the password for DR_admin on both the primary and standby sites:
SID: ERP
Participated DB: master, ERP_1,ERP_2,ERP
primary host: site0.sap.com
standby host: site1.sap.com
Executing 'set replication off' on the ASE at site1.sap.com:12510
Executing 'alter login 'DR_admin' with password '********' modify password
'********'' on the ASE at site1.sap.com:12510.
Executing 'set replication on' on the ASE at site1.sap.com:12510.
Executing 'set replication off' on the ASE at site0.sap.com:12895.
Executing 'alter login 'DR_admin' with password '********' modify password
'********'' on the ASE at site0.sap.com:12895.
Executing 'set replication on' on the ASE at site0.sap.com:12895.
Executing 'alter user DR_admin set password '********' verify password
'********'' on the Replication Server at site1.sap.com:12505.
Executing 'alter user DR_admin set password '********' verify password
'********'' on the Replication Server at site0.sap.com:12890.
Executing 'alter user ERP_RA_site1 set password '********' verify password
'********'' on the Replication Server at site1.sap.com:12505.
Executing 'alter user ERP_RA_site0 set password '********' verify password
'********'' on the Replication Server at site0.sap.com:12890.
Executing 'use master' on the ASE at site1.sap.com:12510.
Executing 'set replication off' on the ASE at site1.sap.com:12510.
Executing 'sp_config_rep_agent master,'rs password', '********'' on the ASE
at site1.sap.com:12510.
Executing 'set replication on' on the ASE at site1.sap.com:12510.
Executing 'use ERP_2' on the ASE at site1.sap.com:12510.
Executing 'set replication off' on the ASE at site1.sap.com:12510.
Executing 'sp_config_rep_agent ERP_2,'rs password', '********'' on the ASE at
site1.sap.com:12510.
Executing 'set replication on' on the ASE at site1.sap.com:12510.
Executing 'use ERP' on the ASE at site1.sap.com:12510.
Executing 'set replication off' on the ASE at site1.sap.com:12510.
Executing 'sp_config_rep_agent ERP,'rs password', '********'' on the ASE at
site1.sap.com:12510.
Executing 'set replication on' on the ASE at site1.sap.com:12510.
Executing 'use ERP_1' on the ASE at site1.sap.com:12510.
Executing 'set replication off' on the ASE at site1.sap.com:12510.
Executing 'sp_config_rep_agent ERP_1,'rs password', '********'' on the ASE at
site1.sap.com:12510.
Executing 'set replication on' on the ASE at site1.sap.com:12510.
Executing 'use master' on the ASE at site0.sap.com:12895.
● The following login accounts use the same DR_admin password, so the execution of sap_set_password
also resets the passwords for these login accounts:
○ Rep Agent user in SAP Replication Server and Replication Agent thread for SAP ASE configuration
○ RSSD primary user
○ RSSD maint user
○ HADR external logins
○ ID Server user
● If the password change fails, do not try to change the password again, or a password inconsistency issue
might occur. Reset the passwords manually using the following procedures:
1. Check if you can log in to the primary SAP ASE server as DR_admin using the new password. If the
login fails, it means that the password was not changed successfully. Use the following command to
change the password:
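The reset in step 1 can be sketched from the statements that sap_set_password itself issues in Example 1 (run these on the primary SAP ASE; the passwords are placeholders):

```
set replication off
go
alter login DR_admin with password <old_password> modify password <new_password>
go
set replication on
go
```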
2. Log in to the standby SAP ASE server using the same process as for the primary server.
3. Using the new external login password, check if you can log in to the remote SAP ASE server by
executing <remote_ASE>...sp_help. If the login fails, it means that the password was not changed
successfully. Use the following command to change the password:
set replication on
go
4. Execute the same operation on the standby SAP ASE server as you performed on the primary server.
5. Using the new external login password, check if you can log in to the local RMA by executing <local
ASE Server name>_DRA...hadrstatuspath. If the login fails, it means that the password was not
changed successfully. Use the following command to change the password:
6. Execute the same operation on the standby SAP ASE server as you performed on the primary server.
7. For all HADR participating databases on the primary SAP ASE, execute the following command:
8. Execute the same operation on the standby SAP ASE server as you performed on the primary server.
9. Restart RepAgent for the primary SAP ASE server. This step is not required for the standby SAP ASE
server.
10. Using the new password, check if you can log in to the SAP Replication Server on the primary SAP ASE
server as DR_admin. If the login fails, it means that the password was not changed successfully. Use
the following command to change the password:
alter user DR_admin set password <new password> verify password <old
password>
11. Execute the same operation on the SAP Replication Server on the standby SAP ASE server as you
performed on the primary server.
12. Using the new password, check if you can log in to the SAP Replication Server on the primary SAP ASE
server as the RepAgent user <SID>_RA_<logical host name>. If the login fails, it means that the
password was not changed successfully. Use the following command to change the password:
alter user <SID>_RA_<logical host name> set password <new password> verify
password <old password>
13. Execute the same operation on the SAP Replication Server on the standby SAP ASE server as you
performed on the primary server.
14. Using the new password, check if you can log in to the SAP Replication Server on the primary SAP ASE
server as the RSSD primary user <RS server name>_RSSD_prim. If the login fails, it means that the
password was not changed successfully. Use the following command to change the password:
alter user <RS server name>_RSSD_prim set password <new password> verify
password <old password>
15. Execute the same operation on the SAP Replication Server on the standby SAP ASE server as you
performed on the primary server.
16. Using the new password, check if you can log in to RSSD on the primary SAP ASE server as the RSSD
maintenance user <SID>_REP_<primary logical host name>_RSSD_maint. If the login fails, it
means that the password was not changed successfully. Use the following command to change the
password:
17. Execute the same operation on the standby SAP ASE server as you performed on the primary server.
18. Check if the connection to <RS server name>_RSSD.<RS server name>_RSSD is available by
logging in to the SAP Replication Server on the primary SAP ASE server. If the connection is
unavailable, it means that the password was not changed successfully. Log in to RSSD as the RSSD
primary user <RS server name>_RSSD_prim and execute the following command to change the
password:
19. Execute the same operation on the standby SAP ASE server as you performed on the primary server.
alter user <RS server name>_id set password '********' verify password
'********'
Refer to SAP Note 2185942 for more information about how to update the id server password.
21. Perform the same operations on the standby SAP ASE server as you did on the primary server.
12.1.28 sap_set_replication_service
Use the sap_set_replication_service command to update the Replication Server Windows Service
credentials.
Using the sap_set_replication_service command, you can also restart the Replication Server through the
Windows Service. The Windows Service does not exist until the rs_init utility is executed to create the
Replication Server.
Syntax
Parameters
<logical_host_name>
Specifies the name of the logical host.
create
Defines the Windows service on the logical host.
restart
Restarts the Replication Server on the logical host using the Windows Service for the
Replication Server belonging to that logical host.
<domain_name>
Specifies the domain name.
<user_name>
Specifies the username.
<password>
Specifies the password.
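A hedged sketch of the create call with illustrative domain and credentials (the argument order is assumed from the parameter list above):

```
sap_set_replication_service myhost, create, MYDOMAIN, sapuser, sappassword
go
```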
Example 1
Defines a Windows Service on the local machine with the credentials provided:
Example 2
Defines a Windows Service on myhost logical host for the Replication Server belonging to that logical host:
By default, the service uses the local system account credentials. You can change the login credentials with a
subsequent call.
Example 3
Restarts Replication Server on myhost logical host using the Windows Service for the Replication Server
belonging to that logical host:
Example 4
Sets or changes the login credentials for the Windows Service on myhost logical host to use SAP username
and SAP password:
Usage
● If the user name or password includes non-alphanumeric characters, such as “@”, enclose them in double
quotation marks. For example:
● If you are in the local host domain, use .\<user name> instead of localhost\<user name>. For
example, if sapuser is in the local host domain, the command is:
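A hedged sketch of the local-domain form described above (the create action, quoting, and password are illustrative; the exact placement of the .\ prefix within the arguments may differ):

```
sap_set_replication_service myhost, create, ".\sapuser", "sap@password"
go
```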
Syntax
Parameters
<env_type>
Specifies the type of environment the presetup check process validates. DR Agent
supports the disaster recovery "dr" option, which configures the environment for
SAP ASE HADR.
<primary_logical_host_name>
Specifies the name of the logical host that identifies the primary site.
<standby_logical_host_name>
Specifies the name of the logical host that identifies the standby site.
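Given the parameters above, the setup call can be sketched as follows; note that the command name (sap_setup_replication) and the argument order are assumptions, as neither appears in this section:

```
sap_setup_replication dr, site0, site1
go
```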
Usage
This command is executed asynchronously (in the background) and may take 30 or more minutes to complete.
The order of the logical host names dictates the direction of replication (from primary to standby). The setup
command returns immediately, indicating the setup task has been successfully started and is running
asynchronously.
12.1.30 sap_sql_replication
Use the sap_sql_replication command to enable and disable SQL statement replication, configure
threshold, and display SQL statement settings.
Syntax
<option> ::= { U | D | I | S }
sap_sql_replication {<database> | All}, display
sap_sql_replication {<database> | All}, threshold, <value>
Parameters
<database>
Specifies to enable or disable SQL statement replication, or to configure and display
SQL statement settings, for a specific database.
All
Instructs the system to enable or disable SQL statement replication, or to configure
and display SQL statement settings, for the whole HADR environment.
on | off
Enables or disables SQL statement replication.
<option>[<option>][…]
Specifies the DML operations you want to enable or disable in SQL statement
replication. The options are:
● U – update
● D – delete
● I – insert select
● S – select into
<table>[,<table>][,…]
Specifies to enable or disable SQL statement replication for specific tables by name.
threshold, <value>
By default, SQL statement replication is triggered when a SQL statement affects
more than 50 rows. You can adjust the threshold value according to your needs.
Different threshold values can only be set at the database level.
To set the threshold for a specific database, specify the <database> parameter. Use
All to set the threshold for the whole HADR environment.
The <value> parameter defines the minimum number of rows a SQL statement must
affect before SQL statement replication is triggered.
display
Displays SQL statement settings, such as the value of threshold and the tables that are
enabled or disabled with SQL statement replication.
To display settings for a specific database, specify the <database> parameter. Use
<All> to display settings for the whole HADR environment.
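Drawing on the syntax fragments and parameters above, the main forms can be sketched as follows (the database name, options, and threshold value are illustrative):

```
-- enable replication of update and delete statements for the ERP database
sap_sql_replication ERP, on, UD
go
-- raise the row threshold for all databases
sap_sql_replication All, threshold, 100
go
-- display the current settings for ERP
sap_sql_replication ERP, display
go
```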
Example
Example 1
This example replicates update and delete statements as SQL statements for the ERP database:
TASKNAME        TYPE                  VALUE
--------------- --------------------- -------------------------------------------------
SQL Replication Start Time            Thu Sep 13 02:24:50 UTC 2018
SQL Replication Elapsed Time          00:00:01
SQLReplication  Task Name             SQL Replication
SQLReplication  Task State            Completed
SQLReplication  Short Description     Toggle SQL Replication in the system
SQLReplication  Long Description      Enable the SQL Replication on Replication Server.
SQLReplication  Current Task Number   2
SQLReplication  Total Number of Tasks 2
SQLReplication  Task Start            Thu Sep 13 02:24:50 UTC 2018
SQLReplication  Task End              Thu Sep 13 02:24:51 UTC 2018
SQLReplication  Hostname              rmazwang2site0.mo.sap.corp
Example 2
This example disables the replication of delete and insert select statements as SQL statements for all
databases:
Example 3
This example replicates update, delete, and insert select statements as SQL statements for specific tables:
TASKNAME        TYPE                  VALUE
--------------- --------------------- -------------------------------------------------
SQL Replication Start Time            Thu Sep 13 02:28:33 UTC 2018
SQL Replication Elapsed Time          00:00:00
SQLReplication  Task Name             SQL Replication
SQLReplication  Task State            Completed
SQLReplication  Short Description     Toggle SQL Replication in the system
SQLReplication  Long Description      Enable the SQL Replication on Replication Server.
SQLReplication  Current Task Number   2
SQLReplication  Total Number of Tasks 2
SQLReplication  Task Start            Thu Sep 13 02:28:33 UTC 2018
SQLReplication  Task End              Thu Sep 13 02:28:33 UTC 2018
SQLReplication  Hostname              rmazwang2site0.mo.sap.corp
(11 rows affected)
Example 4
Example 5
This example triggers SQL statement replication when the SQL statement affects more than 50 rows for all
databases:
Example 6
This example displays the SQL statement settings for the database ERP:
Example 7
This example displays the SQL statement settings for the database ERP_1:
Example 8
This example displays the SQL statement settings for the database ERP:
Usage
The following table describes the output columns when specifying the <display> parameter:
Column Description
DB_NAME The name of the database displayed with the corresponding SQL statement settings.
THRESHOLD The threshold value set for the indicated database. The threshold value is the minimum
number of rows that must be affected by a SQL statement before SQL statement replication
is triggered.
● U – update
● D – delete
● I – insert select
● S – select into
● All – SQL statement replication for all tables is enabled for the corresponding DML operation. The
TABLE_LIST column does not show table details.
● None – SQL statement replication for all tables is disabled for the corresponding DML operation. The
TABLE_LIST column does not show table details.
● In-List – the TABLE_LIST column lists tables that are enabled with SQL statement replication for the
corresponding DML operation.
● Out-List – the TABLE_LIST column lists tables that are disabled with SQL statement replication for the
corresponding DML operation.
TABLE_LIST The list of tables that are enabled or disabled with SQL statement replication for the
corresponding DML operation.
Related Information
12.1.31 sap_status
Use the sap_status command to monitor the detailed status of a replication path, such as source and target
ASE connection, source and target Replication Server connection, source ASE Replication agent, source
Replication Server route thread (RSI) to target Replication Server, and target replication DSI thread to target
ASE.
Syntax
Parameters
task
Examples
Example 1
The sap_status active_path command displays the replication paths that are active from the primary site
to the standby site.
Syntax
sap_status active_path
Examples
Example 1
Displays the replication paths that are active from the primary site to the standby site:
sap_status active_path
go
Usage
Synchronization Mode The replication synchronization mode you have configured between a database and
the SAP Replication Server, which can be one of:
● Synchronous
● Asynchronous
Synchronization State The current replication synchronization mode between a database and the SAP
Replication Server, which can differ from the mode you have configured.
Note
The synchronization state returned by the sap_status active_path command represents the state of
all databases that are replicated by the primary site. If the synchronization state of the different databases
is not the same (for example, if one database is in the 'synchronous' state and another is in the
'asynchronous' state), the result displayed by the sap_status active_path command for the site is
'Inconsistent', indicating the databases do not all have the same synchronization state at this time.
Distribution Mode The replication distribution mode you have configured between a database and the
Replication Server, which can be one of:
● Local
● Remote
Replication Server Status The status of the Replication Server, which can be one of:
● Active
● Down
● Unknown
State The status of the replication path, which can be one of:
Latency Time The timestamp of the most recent trace command that was applied to the target database and
used for the latency calculation.
Latency The approximate length of time it takes for an update on the source system to reach the target,
based on the last trace command sent.
Commit Time The local timestamp of a command applied to the target database.
Distribution Path The logical host name of the distribution target server.
Drain Status The status of draining the primary database server's transaction logs. Values are:
● Drained: The primary database server's transaction logs are completely transferred to Replication Server.
● Not Drained: The primary database server's transaction logs are only partially transferred to Replication
Server.
● Unknown: The status cannot be queried.
Note
To get the <Latency Time>, <Latency> and <Commit Time> parameter values, first execute the
sap_send_trace <primary logical host name> command, then execute the sap_status
active_path command.
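Per the note above, a latency reading requires sending a trace first; this can be sketched as follows (the primary logical host name site0 is illustrative):

```
sap_send_trace site0
go
sap_status active_path
go
```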
The sap_status path command monitors information on the replication modes you have configured, the
current replication states in the HADR with DR node environment, distribution mode and path, Replication
Server status, and latency.
Syntax
sap_status path
Examples
Example 1
Monitors and returns the information on the replication modes you have configured, the current replication
states in the HADR with DR node environment, distribution mode and path, Replication Server status, and
latency:
sap_status path
go
Usage
Information Description
Start Time The time point that the command starts to run.
Synchronization Mode One of two replication synchronization modes you have configured between a database
and the SAP Replication Server:
● Synchronous
● Asynchronous
Synchronization State The current replication synchronization mode between a database and the SAP
Replication Server, which can be different from the mode you have configured.
Note
The synchronization state returned by the sap_status path command represents the state of all
databases that are replicated by the primary site. If the synchronization state of the different databases is
not the same (for example, if one database is in the synchronous state and another is in the asynchronous
state), the result displayed by the sap_status path command for the site is Inconsistent, indicating
the databases do not all have the same synchronization state at this time.
Distribution Mode One of two replication distribution modes you have configured between a database and the
Replication Server:
● Local
● Remote
Replication Server Status The status of the Replication Server, which can be one of:
● Active
● Down
● Unknown
State The status of the replication path, which can be one of:
Latency Time The timestamp of the most recent trace command that was applied to the target database and
used to calculate latency.
Latency The approximate length of time it takes for an update on the source system to reach the target,
based on the last trace command sent.
Note
During bulk materialization, the Replication Server holds the transactions in the outbound queue (OBQ)
until the subscription marker is processed. The sap_status path command may report some latency in
replication during this time. It can be ignored as it is just a difference between the previous rs_ticket and
the current time.
The rs_ticket stored procedure works with the replicate database stored procedure
rs_ticket_report to measure the amount of time it takes for a command to move from the primary
database to the replicate database.
Commit Time The local timestamp of a command applied to the target database.
Distribution Path The logical host name of the distribution target server.
Drain Status The status of draining the primary database server's transaction logs. Values are:
● Drained: The primary database server's transaction logs are completely transferred to Replication Server.
● Not Drained: The primary database server's transaction logs are only partially transferred to Replication
Server.
● Unknown: The status cannot be queried.
Note
To get the <Latency Time>, <Latency> and <Commit Time> parameter values, first execute the
sap_send_trace <primary logical host name> command, then execute the sap_status path
command.
Monitors the estimated minimum failover time, Replication Server device size, simple persistent queue (SPQ)
size, usage, backlog, replication truncation backlog (inbound queue and outbound queue), replication route
queue truncation backlog, SAP ASE transaction log size and backlog, as well as stable queue backlogs.
Syntax
sap_status resource
Example 1
sap_status resource
go
NAME          TYPE          VALUE
------------- ------------- -------------------------------------------------
Usage
Estimated Minimum Failover Time The failover time estimated by the system. The value is -1 in the following
conditions:
● Replication Server has recently started and initialization is still underway.
● The data server interface (DSI) thread in the Replication Server is inactive.
● DR Agent has communication errors with Replication Server.
Replication device size (MB) The disk space allocated for the Replication Server. Displays "Unable to monitor
the replication devices" if the Replication Server cannot be reached.
Replication device usage (MB) The disk space used by the Replication Server. Displays "Unable to monitor
the replication devices" if the Replication Server cannot be reached.
Note
If the device usage percentages returned from the command are high, consider adding device space to the
replication paths to reduce the risk that the primary ASE transaction log will run out of space.
Replication simple persistent queue size (MB) The disk space allocated for the simple persistent queue.
Displays "Unable to monitor the replication devices" if the Replication Server cannot be reached.
ASE transaction log size (MB) The disk space allocated for saving the transaction logs in the primary SAP
ASE. Displays "Unable to monitor the ASE transaction log" if the primary SAP ASE cannot be reached.
ASE transaction log backlog (MB) The accumulated logs to be processed in the primary SAP ASE. Displays
"Unable to monitor the ASE transaction log" if the primary SAP ASE cannot be reached.
Replication simple persistent queue backlog (MB) The accumulated logs to be processed in the simple
persistent queue. Displays "Unable to monitor the replication queues" if the Replication Server cannot be
reached.
Replication inbound queue backlog (MB) The accumulated logs to be processed in the inbound queue.
Displays "Unable to monitor the replication queues" if the Replication Server cannot be reached.
Replication route queue backlog (MB) The accumulated logs to be processed in the route queue. Displays
"Unable to monitor the replication queues" if the Replication Server cannot be reached.
Replication outbound queue backlog (MB) The accumulated logs to be processed in the outbound queue.
Displays "Unable to monitor the replication queues" if the Replication Server cannot be reached.
Replication queue backlog (MB) The sum of the simple persistent queue backlog, inbound queue backlog,
and outbound queue backlog. Displays "Unable to monitor the replication queues" if the Replication Server
cannot be reached.
Replication truncation backlog (MB) The data in the Replication Server queues (inbound queue (IBQ),
outbound queue (OBQ), and route queue (RQ)) that cannot be truncated. Displays "Unable to monitor the
replication queues" if the Replication Server cannot be reached.
The sap_status route command monitors the sequence of queues, threads, and servers that the data is
transacting in the replication path.
Syntax
sap_status route
Examples
Example 1
Returns information about the queues, threads, and servers:
sap_status route
go
PATH         SEQUENCE NAME  TYPE QID  SPID  SITE  STATE                     BACKLOG
------------ -------- ----- ---- ---- ----- ----- ------------------------- -------
PR.DR.master 1        ASE   S    NULL 58312 site0 Active                    0
PR.DR.master 2        RAT   T    NULL 63    site0 Active                    NULL
PR.DR.master 3        RATCI T    NULL NULL  site1 Active (Active)           NULL
PR.DR.master 4        SPQ   Q    106  NULL  site1 NULL                      0
PR.DR.master 5        CAP   T    NULL 53    site1 Active (Awaiting Command) NULL
PR.DR.master 6        SQM   T    NULL 22    site1 Active (Awaiting Message) NULL
PR.DR.master 7        IBQ   Q    106  NULL  site1 NULL                      0
PR.DR.master 8        SQT   T    NULL 73    site1 Active (Awaiting Wakeup)  NULL
Usage
Sequence The order number of the current queue, thread, or server in the sequence. See
the Result Set Row Description table, below, for detailed information.
Type The type of the component, which can be one of:
● T - Thread
● Q - Queue
● S - Server
SPID The ID number of the current thread or the process ID of the server.
Site The host name of the server in which the thread or queue is located.
State The state of the component, which can be one of:
● Active
● Down
● NULL - represents SQL<NULL>, which means the information cannot be
queried.
Note
Threads also have some other specific states.
Backlog The accumulated logs to be processed. Displays 0 when there are no logs to be
processed. Displays NULL when the information cannot be queried.
Note
Backlogs are only available for queues and the primary ASE, so NULL is
displayed for threads and the standby ASE.
2 RAT Replication Agent thread - reads and analyzes the transaction logs of the primary
SAP ASE.
8 SQT Stable queue transaction - sorts logs according to the commit time.
10 SQM (Only for local distribution mode) Stable queue management - manages the route
queue.
12 RSI (Only for local distribution mode) Replication Server interface - the interface
between Replication Servers.
15 DSI Data server interface - the interface that connects to the standby database.
Syntax
sap_status spq_agent
Examples
Example 1
Displays information about the SPQ Agent. In this example, the participating databases are master, PI2,
and db1. Database db1 is configured for SPQ Agent with ACTIVE status from remote connection
PI2_RP_R2 and with INACTIVE status from remote connection PI2_HA_R1. The local connections
(PI2_PR and PI2_HA) of db1 are not configured for SPQ Agent.
sap_status spq_agent
go
Usage
State The state of the SPQ Agent, which can be either active or inactive:
● Active: Indicates that the SPQ Agent is configured and the external replication
is functional on this path.
● Inactive: Indicates that the external replication is not functional on this path.
Backlog The size of the SPQ Agent backlog. If the backlog size is not available, it indicates
that the SPQ Agent (external replication) is not configured on this path.
Use the sap_status synchronization command to monitor information on the replication modes you
have configured, and the current replication states in the HADR environment.
Syntax
sap_status synchronization
Examples
Example 1
Monitors and returns the information on the replication modes you have configured, and the current
replication states in the HADR environment:
sap_status synchronization
go
Information Description
Synchronization Mode The replication synchronization mode you have configured between a database and
the SAP Replication Server, which can be one of:
● Synchronous
● Asynchronous
Synchronization State The current replication synchronization mode between a database and the SAP
Replication Server, which can be different from the mode you have configured. Also provides the
synchronization state for each database.
Note
The synchronization state returned by the sap_status synchronization command represents the
state of all databases that are replicated by the primary site. If the synchronization state of the different
databases is not the same (for example, if one database is in the 'synchronous' state and another is in the
'asynchronous' state), the result displayed by the sap_status synchronization command for the site
is 'Inconsistent', indicating the databases do not all have the same synchronization state at this time.
12.1.32 sap_suspend_component
Syntax
Parameters
<cpnt_name>
Specifies the name of the component to be suspended.
<src_hostname>
Specifies the source host name.
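Matching the description of Example 2, a suspension call can be sketched as follows; the argument order (component, source host, target host, database) is an assumption inferred from the example descriptions, as the full syntax does not appear in this excerpt:

```
sap_suspend_component RAT, PR, HA, master
go
```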
Examples
Example 1
sap_status route
Example 2
Suspends the inactive RAT component from site “PR” to “HA” on database “master”:
Example 3
Example 4
Suspends the “CAP” component from site “HA” to “PR” on database “master”.
The supported values for the <cpnt_name> parameter are:
● RAT
● RATCI
● CAP
● DIST
● RSI
● DSI
● ALL
Set the cpnt parameter to ALL to suspend all the supported components.
Note
When you set the <cpnt> parameter to ALL, RMA initiates a batch job to suspend all the supported
components. However, the RSI component is not suspended, because suspending the RSI component on
one path may affect other paths as well.
12.1.33 sap_suspend_replication
Syntax
Parameters
<standby_logical_host_name>
Specifies the standby logical host name.
all
Suspends replication to all SAP databases.
<database_name>
Suspends replication to the specified database.
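The two forms described above can be sketched as follows (the standby logical host name site1 and database name ERP are illustrative):

```
sap_suspend_replication site1, all
go
sap_suspend_replication site1, ERP
go
```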
Tearing down a replication environment includes disabling replication in the SAP ASE servers, stopping the SAP
Replication Servers, and deleting all directories and files created during setup, including the SAP Replication
Server instances.
Use the sap_teardown command to tear down the replication environment. The command does not modify
any data that has been replicated to the standby databases; however, the databases on both the primary and
standby hosts are no longer marked for replication. The command does not remove any software, but it does
remove the SAP Replication Servers and the configurations that support replication.
Syntax
sap_teardown
Examples
Example 1
Tears down the replication environment:
sap_teardown
go
TASKNAME              TYPE              VALUE
--------------------- ----------------- -----------------------------------------------------
Tear Down Replication Start Time        Mon Nov 23 05:24:25 EST 2015
Tear Down Replication Elapsed Time      00:00:22
TearDownRS            Task Name         Tear Down Replication
TearDownRS            Task State        Completed
TearDownRS            Short Description Tear down the Replication Environment
TearDownRS            Long Description  Tear Down of the Replication environment is complete.
TearDownRS            Task Start        Mon Nov 23 05:24:25 EST 2015
TearDownRS            Task End          Mon Nov 23 05:24:47 EST 2015
TearDownRS            Hostname          site1
Tear Down Replication Start Time        Mon Nov 23 05:23:41 EST 2015
Tear Down Replication Elapsed Time      00:00:44
TearDownRS            Task Name         Tear Down Replication
Usage
● Stops the Replication Server; deletes its instance directory, partition files, and simple persistent queue
directories; and kills all Replication Server related processes.
● Demotes the source SAP ASE, if the source host (the machine on which SAP ASE runs) is available.
● Drops all servers from the HADR server list on both SAP ASE servers.
● Drops the HADR group from both servers.
● Disables HADR on both servers.
● Disables CIS RPC Handling.
12.1.35 sap_tune_rat
Tunes and configures the Replication Agent thread for SAP ASE (RepAgent for short) in an HADR environment.
Syntax
Parameters
<database> | all
Specifies the name of the database to be tuned. Specify all for RMA to tune all the
participating databases with the same memory input.
<memory_limit>
Specifies the memory limit, in GB, of the SAP Replication Server that the RepAgent connects to.
The valid range is 4 GB to 256 GB.
Example 1
Execute the command to tune the RepAgent that connects to an SAP Replication Server, with the memory
limit set to 8 GB:
sap_tune_rat PI2, 8
go
Usage
Although RMA automatically runs sap_tune_rat when you add a database in the HADR system, you can
manually configure the parameters if the automatic execution of the command fails.
To view the configuration parameters, use the admin config Replication Server command. For more details,
see the Replication Server Reference Manual.
Table 16: ASE Configuration Parameters for Both the Primary and Standby ASE with RAT Configured, Within an HA Cluster
(Primary and Standby)
replication agent memory size
  51200 + ((<USER_DB_1> <stream buffer size> * <buffer pool size>) +
  (<USER_DB_2> <stream buffer size> * <buffer pool size>) + … +
  (<USER_DB_N> <stream buffer size> * <buffer pool size>)) / 2048 +
  16384 * <number of replicate databases on non-Windows platforms>
  <USER_DB_N> refers to all the participating databases in the HADR system, excluding the master database.
Note
RepAgent memory is tuned only when the current memory is less than the
calculated value. If the current memory is larger than the calculated value,
RepAgent does not change its memory.
max memory The max memory value varies according to the replication agent
memory value. If the replication agent memory value decreases, the
value of max memory remains the same. If the replication agent
memory increases, additional memory is added to max memory.
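As an illustration, the Table 16 formula can be checked with a short calculation. All values below are hypothetical, and the assumption that the result is expressed in 2 KB pages (the usual unit for ASE memory parameters) is ours, not the guide's:

```python
# Hypothetical worked example of the Table 16 formula for
# 'replication agent memory size'. Each tuple holds a user database's
# (stream buffer size in bytes, buffer pool size); the database list,
# buffer values, and the 2 KB-page unit are illustrative assumptions.
user_dbs = [
    (1048576, 20),  # USER_DB_1: 1 MB stream buffer, pool of 20 buffers
    (1048576, 10),  # USER_DB_2: 1 MB stream buffer, pool of 10 buffers
]
n_replicate_dbs = 2  # replicate databases on non-Windows platforms

base = 51200  # fixed base term from the formula
# Sum of (stream buffer size * buffer pool size), divided by 2048.
per_db = sum(buf * pool for buf, pool in user_dbs) // 2048
total = base + per_db + 16384 * n_replicate_dbs

print(per_db, total)  # per-database term and calculated total
```

Per the Note above, RMA would apply such a calculated value only when it exceeds RepAgent's current memory setting.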
12.1.36 sap_tune_rs
Specifies the number of the CPUs and the maximum size of the memory to tune the Replication Server
instance after the HADR replication system is set up and Replication Server instances have been created on all
the hosts.
Syntax
Parameters
<logical_host_name>
Specifies the logical host name of the primary Replication Server to be configured.
<memory_limit>
Specifies the available memory limit.
<cpu_number>
Specifies the available number of CPUs.
Examples
Example 1
Tunes SAP Replication Server on logical host site1 with 4 GB of memory and 2 CPUs:
sap_tune_rs site1,4,2
go
TASKNAME TYPE VALUE
----------------------- -----------------
---------------------------------------
Tune Replication Server Start Time Wed Apr 01 05:06:05 EDT 2015
Tune Replication Server Elapsed Time 00:00:33
TuneRS Task Name Tune Replication Server
TuneRS Task State Completed
TuneRS Short Description Tune Replication Server configurations.
TuneRS Task Start Wed Apr 01 05:06:05 EDT 2015
TuneRS Task End Wed Apr 01 05:06:38 EDT 2015
TuneRS Hostname site0
Example 2
Displays the configurations applied on the SAP Replication Server on logical host site1 with 50 GB of
memory and 8 CPUs:
Usage
● Starting from 16.0 SP03 PL04, sap_tune_rs can only tune the memory_limit and memory_control
parameters if smart memory control is enabled on SAP Replication Server.
See Smart Memory Control for details.
● If the sap_tune_rs command fails, you can manually configure the parameters. To view the configuration
values, use the admin config Replication Server command. For more details, refer to the Replication
Server Reference Manual.
● (Only for Linux with the numactl command) The <cpu_number> parameter affects the Replication Server
instance run file by binding the Replication Server processes to specific CPUs with the numactl
system command. The Replication Server run file with the appended numactl command is saved as
$SYBASE/<CID_database_name>_REP_<logical_host_name>/
RUN_<CID_database_name>_REP_<logical_host_name>.sh. The SAP Replication Server run
file without the appended numactl command is saved as a backup in the same directory with the name
RUN_<CID_database_name>_REP_<logical_host_name>.sh.prev. The RMA only backs up run
scripts in which no lines have the numactl CPU binding appended.
For example, the RUN_PI2_REP_site1.sh run script is updated if the RS instance directory is /
sybase/DM/PI2_REP_site1.
By default, the Replication Server is bound to CPUs starting with CPU0. To bind the SAP Replication
Server to other CPUs, modify the <USER_BIN_DIRECTORY>/numactl --physcpubind=0-1 line of the
run script.
The RMA does not modify the Replication Server run file or create a backup of the .prev file if the
numactl command is not found under the following <USER_BIN_DIRECTORY> directories:
○ /usr/bin
○ /usr/local/bin
○ /usr/sbin
○ /bin
○ /sbin
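The lookup and CPU-binding behavior described above can be sketched as follows. This is a simplified illustration, not RMA's actual implementation; the function names are ours:

```python
import os

# Directories searched for numactl, in the order listed above.
NUMACTL_DIRS = ["/usr/bin", "/usr/local/bin", "/usr/sbin", "/bin", "/sbin"]

def find_numactl(isfile=os.path.isfile):
    """Return the first existing numactl path, or None if absent.
    When None, the run file is left untouched and no .prev backup is made."""
    for d in NUMACTL_DIRS:
        path = os.path.join(d, "numactl")
        if isfile(path):
            return path
    return None

def bind_line(launch_line, numactl_path, cpu_range="0-1"):
    """Prepend the CPU binding to the run script's launch line.
    A line that already contains numactl is returned unchanged, mirroring
    the rule that RMA only updates scripts with no appended numactl."""
    if "numactl" in launch_line:
        return launch_line
    return f"{numactl_path} --physcpubind={cpu_range} {launch_line}"
```

For example, `bind_line("exec repserver ...", "/usr/bin/numactl")` yields the bound launch line, while an already-bound line passes through untouched.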
12.1.37 sap_update_replication
Syntax
Parameters
add_db
Adds the specified database to the HADR replication system.
<database_name>
Specifies the name of the database.
remove_db
Removes the specified database from the HADR replication system.
add_additional_device
Indicates that you are adding a device to the HADR system.
<logical_host_name>
Name of the host on which you are adding the device.
<device_directory>
Path to the directory in which you are creating the device.
<device_size>
Size of the device.
Use the sap_update_replication add_db command to add or remove a database from the HADR
replication system.
Syntax
<database_name>
The name of the database to be added or removed.
add_db
Adds a database to the HADR replication system.
remove_db
Removes a database from the HADR replication system.
Examples
Example 1
sap_update_replication remove_db,PI2
go
TASKNAME           TYPE                  VALUE
------------------ --------------------- ------------------------------------------------------
Update Replication Start Time            Fri Nov 20 02:18:17 EST 2015
Update Replication Elapsed Time          00:00:02
DRExecutorImpl     Task Name             Update Replication
DRExecutorImpl     Task State            Running
DRExecutorImpl     Short Description     Update configuration for a currently replicating site.
DRExecutorImpl     Long Description      Started task 'Update Replication' asynchronously.
DRExecutorImpl     Additional Info       Please execute command 'sap_status task' to determine when task 'Update Replication' is complete.
UpdateReplication  Task Name             Update Replication
UpdateReplication  Task State            Running
UpdateReplication  Short Description     Update configuration for a currently replicating site.
UpdateReplication  Long Description      Disabling the incoming replication data for database 'PI2'.
UpdateReplication  Current Task Number   2
UpdateReplication  Total Number of Tasks 10
UpdateReplication  Task Start            Fri Nov 20 02:18:17 EST 2015
UpdateReplication  Hostname
sap_status task
go
TASKNAME          TYPE                  VALUE
----------------- --------------------- --------------------------------------------------------
Status            Start Time            Fri Nov 20 02:18:17 EST 2015
Status            Elapsed Time          00:04:58
UpdateReplication Task Name             Update Replication
UpdateReplication Task State            Completed
UpdateReplication Short Description     Update configuration for a currently replicating site.
UpdateReplication Long Description      Update replication request to remove database 'PI2' completed successfully.
UpdateReplication Current Task Number   10
UpdateReplication Total Number of Tasks 10
UpdateReplication Task Start            Fri Nov 20 02:18:17 EST 2015
UpdateReplication Task End              Fri Nov 20 02:23:15 EST 2015
UpdateReplication Hostname              site0
(11 rows affected)
sap_update_replication remove_db,PI2
go
TASKNAME           TYPE              VALUE
------------------ ----------------- ------------------------------------------------------
Update Replication Start Time        Fri Nov 20 02:24:52 EST 2015
Update Replication Elapsed Time      00:00:00
UpdateReplication  Task Name         Update Replication
UpdateReplication  Task State        Error
UpdateReplication  Short Description Update configuration for a currently replicating site.
Example 2
sap_update_replication add_db,PI2
go
TASKNAME                          TYPE                  VALUE
--------------------------------- --------------------- ---------------------------------------------------
Update Replication                Start Time            Fri Nov 20 02:26:11 EST 2015
Update Replication                Elapsed Time          00:00:02
DRExecutorImpl                    Task Name             Update Replication
DRExecutorImpl                    Task State            Running
DRExecutorImpl                    Short Description     Update configuration for a currently replicating site.
DRExecutorImpl                    Long Description      Started task 'Update Replication' asynchronously.
DRExecutorImpl                    Additional Info       Please execute command 'sap_status task' to determine when task 'Update Replication' is complete.
UpdateReplication                 Task Name             Update Replication
UpdateReplication                 Task State            Running
UpdateReplication                 Short Description     Update configuration for a currently replicating site.
UpdateReplication                 Long Description      Add database 'PI2' into Replication Servers.
UpdateReplication                 Current Task Number   1
UpdateReplication                 Total Number of Tasks 3
UpdateReplication                 Task Start            Fri Nov 20 02:26:11 EST 2015
UpdateReplication                 Hostname              site0
AddASEDatabaseForDisasterRecovery Task Name             Add Database to Replication
AddASEDatabaseForDisasterRecovery Task State            Running
AddASEDatabaseForDisasterRecovery Short Description     Add an ASE database to the Replication System for Disaster Recovery support.
Usage
● After a new database is added to the HADR replication system, use the sap_materialize command to
materialize the database with the current primary database.
● After a database is removed from the HADR replication system, the replication path between the primary
and standby databases is uninstalled, and no further synchronization occurs.
● This command cannot add or remove SAP ASE system databases, including the master database, which is
needed for database login and user information synchronization.
● This command cannot add or remove SAP CID databases.
Use the sap_update_replication distribution_mode command to change the distribution mode for
the logical host.
Syntax
Parameters
<source_logical_host_name>
Specifies the logical host name of the source.
local
Changes the distribution mode of the site to local.
remote
Changes the distribution mode of the site to remote.
<target_logical_host_name>
Specifies the logical host name of the target.
Examples
Example 1
sap_update_replication distribution_mode,site0,local
go
TASKNAME           TYPE         VALUE
------------------ ------------ ----------------------------------------------------------------
Update Replication Start Time   Fri Nov 20 02:56:53 EST 2015
Update Replication Elapsed Time 00:00:33
UpdateReplication  Task Name    Update Replication
UpdateReplication  Task State
Example 2
If the command fails, find the error in the row where the TYPE value is Failing Command Error Message:
TASKNAME           TYPE         VALUE
------------------ ------------ ----------------------------------------------------------------
Update Replication Start Time   Fri Nov 20 03:09:36 EST 2015
Update Replication Elapsed Time 00:00:00
UpdateReplication  Task State   Error
UpdateReplication  Hostname     site0
Syntax
Variable declaration:
<logical_host_name>
Specifies the name of the host on which you are changing the synchronization mode.
Examples
Example 1
TASKNAME           TYPE              VALUE
------------------ ----------------- ----------------------------------------------------------
Update Replication Start Time        Fri Nov 20 03:24:52 EST 2015
Update Replication Elapsed Time      00:00:22
UpdateReplication  Task Name         Update Replication
UpdateReplication  Task State        Completed
UpdateReplication  Short Description Update configuration for a currently replicating site.
UpdateReplication  Long Description  Successfully submitted the design changes for local host 'site0' for server on host 'null'.
UpdateReplication  Task Start        Fri Nov 20 03:24:52 EST 2015
UpdateReplication  Task End          Fri Nov 20 03:25:14 EST 2015
UpdateReplication  Hostname          site0
(9 rows affected)
Syntax
Parameters
add_additional_device
Indicates that you are adding a device to the HADR system.
<logical_host_name>
Specifies the logical host name of the SAP Replication Server on which the device
space is to be added.
<logical_device_name>
Specifies the logical name of the device to be added.
<device_file>
Specifies the path to the directory in which you are creating the device and the name of
the device file.
<device_size>
Specifies the size of the device.
Examples
Example 1
Adds a 20 MB device named part9 on the site0 SAP Replication Server under /testenv7/partition9.dat:
Example 2
If the command fails, find the error from the row where the TYPE value is Failing Command Error
Message:
Related Information
12.1.38 sap_upgrade_server
The sap_upgrade_server RS and sap_upgrade_server ASE commands are used while you upgrade the
Replication Server and the SAP ASE server, respectively.
Syntax
Note
See the Configuration Guide for information about upgrading Replication Server.
Parameters
SRS
Upgrades the Replication Server.
ASE
Upgrades the SAP ASE server.
start
Starts the upgrade. Execute this command before you install the new server release.
finish
Finishes the upgrade. Execute this command after you install the new server release.
<upgrade_logical_hostname>
Specifies the logical host name where the server to be upgraded is present.
suspend
Indicates the replication path has been suspended. No ticket is sent to verify the
replication status during the upgrade process.
Example 1
1. At the RMA located at the same site as the standby Replication Server, execute:
Shutdown
go
Example 2
Upgrades the standby SAP ASE:
1. At the RMA located at the same site as the standby SAP ASE server, execute:
2. Shut down the SAP ASE server and Backup Server for the site:
Shutdown
go
3. Install the new SAP ASE release and upgrade the site1 SAP ASE including starting the upgraded data
server and Backup Server, and running the post-installation tasks.
4. Start the new RMA and execute:
12.1.39 sap_verify_replication
Use the sap_verify_replication command to verify whether you can change the synchronization mode or the
distribution mode of a running HADR system. After the check passes, you can execute the corresponding
sap_update_replication command.
Syntax
Parameters
<logical_host>
Specifies the logical host name to be changed.
sync
Specifies that the transaction commit is blocked until the log records of the transaction
are received and stored persistently in the standby memory.
async
Specifies that the transaction commit is blocked until the log records of the transaction
are received in the standby memory.
ltl
Specifies that the transaction is transferred by the Log Transfer Language.
<remote_logical_host>
Specifies the remote logical host name when changing the distribution_mode to
remote.
Examples
Example 1
TASKNAME           TYPE         VALUE
------------------ ------------ ------------------------------------------------------
Verify Replication Start Time   Sun Nov 15 22:28:13 EST 2015
Verify Replication Elapsed Time 00:00:00
If the command fails, find the error in the row where the TYPE value is Failing Command Error
Message:
TASKNAME           TYPE         VALUE
------------------ ------------ ------------------------------------------------------
Verify Replication Start Time   Mon Nov 16 04:34:49 EST 2015
Verify Replication Elapsed Time 00:00:00
VerifyReplication  Task State   Error
VerifyReplication  Hostname     site0
Example 2
Changes the site0 distribution mode to remote and specifies the remote target as site1:
sap_verify_replication distribution_mode,site0,remote,site1
go
TASKNAME           TYPE              VALUE
------------------ ----------------- ------------------------------------------------------
Verify Replication Start Time        Sun Nov 15 22:27:15 EST 2015
Verify Replication Elapsed Time      00:00:02
VerifyReplication  Task Name         Verify Replication
VerifyReplication  Task State        Completed
VerifyReplication  Short Description Verify configuration for a currently replicating site.
VerifyReplication  Task Start        Sun Nov 15 22:27:15 EST 2015
VerifyReplication  Task End          Sun Nov 15 22:27:17 EST 2015
VerifyReplication  Hostname          site0
(8 rows affected)
12.1.40 sap_version
Use the sap_version command to display the version of the DR agent plug-in. The all keyword displays a list
of all the servers known and usable by the DR Agent in this replication environment and their versions.
Syntax
sap_version [all]
12.1.41 sap_xa_replication
Use the sap_xa_replication command to enable and disable replication of distributed transactions.
Syntax
sap_xa_replication {on|off}
Parameter
on
Enables replication of distributed transactions.
off
Disables replication of distributed transactions.
Example
Example 1
sap_xa_replication on
go
TASKNAME TYPE VALUE
-------------- ----------------- -----------------------------------
XA Replication Start Time Thu Dec 03 09:45:29 UTC 2020
XA Replication Elapsed Time 00:00:00
XAReplication Task Name XA Replication
XAReplication Task State Completed
XAReplication Short Description Toggle XA Replication in the system
XAReplication Long Description Enable the XA Replication on ASE.
XAReplication Task Start Thu Dec 03 09:45:29 UTC 2020
XAReplication Task End Thu Dec 03 09:45:29 UTC 2020
XAReplication Hostname site0
(9 rows affected)
Example 2
sap_xa_replication off
go
TASKNAME TYPE VALUE
-------------- ----------------- -----------------------------------
XA Replication Start Time Thu Dec 03 09:45:37 UTC 2020
XA Replication Elapsed Time 00:00:01
XAReplication Task Name XA Replication
(9 rows affected)
Usage
● Restart the primary and standby SAP ASE servers for the command to take effect. If the Fault
Manager is configured to initiate a failover, stop it before the restart.
Although you should normally use the SAP ASE Cockpit and the RMA, you may occasionally need to use
sp_hadr_admin to manage and monitor the HADR system from the command line.
Use the SAP ASE sp_hadr_admin command instead of RMA commands in the following situations.
If the standby server goes down, and you subsequently restart the primary server, the HADR system starts the
old primary server as a standby server to avoid a split-brain scenario.
You cannot use an RMA command to rectify the situation. Instead, log in to the current primary server as the
system administrator and perform these steps:
1. (If the server retains the standby role; otherwise skip this step) Promote the server to the primary role
(sp_hadr_admin must be run from the master database):
sp_hadr_admin primary
If the execution fails because the standby node is down, verify that neither node is running as the primary,
and issue:
dbcc dbrecover('<database_name>')
sp_start_rep_agent '<database_name>'
sp_hadr_admin activate
Use the sp_hadr_admin deactivate parameter to perform maintenance or other activity on the primary
server without performing a planned failover (during which you want to restrict transactional activities).
sp_hadr_admin deactivate deactivates the server and, if necessary, demotes it to standby. The syntax is:
sp_hadr_admin deactivate,'<timeout>','<label>'
sp_hadr_admin standby
Use this syntax to cancel an ongoing deactivation and restore the server to the primary active state:
sp_hadr_admin cancel
Use this syntax if you used the nodrain parameter during the deactivation but must subsequently drain the
transaction logs to Replication Server:
sp_hadr_admin drain,'<label>'
Use this syntax to estimate the amount of time SAP ASE requires to roll back active transactions before you
can issue sp_hadr_admin deactivate or sp_hadr_admin drain to drain the transaction log:
sp_hadr_admin failover_timeestimate
Use this syntax to view the status of the log drain activity on the primary server undergoing deactivation or log
drain:
sp_hadr_admin status
Use this syntax to view the transaction replication status on the standby server:
sp_hadr_admin 'list_application_interface'
sp_hadr_admin 'drop_application_interface','<HADR_server_name>'
Syntax
sp_hadr_admin activate
You can run sp_hadr_admin activate only on the inactive primary server.
● Moves the primary server to an inactive state:
HADR_LABEL_YYYYMMDD_HH:MM:SS:MS
The deactivate parameter triggers a transition from the active to the deactivating state, and then to the
inactive state.
● Stops the deactivation process and moves the server back to an active state:
sp_hadr_admin cancel
You can execute sp_hadr_admin cancel only on primary servers in a deactivating state (that is, while
sp_hadr_admin deactivate is executing).
● Changes the mode of an inactive HADR member from primary to standby. If the state of the primary
server is not already inactive, use the deactivate parameter to change the state to inactive before issuing
standby. Use the force parameter to force the mode change if the server was deactivated with nodrain:
HADR_LABEL_YYYYMMDD_HH:MM:SS:MS
● Checks the progress of the deactivation process from primary and standby servers:
● Removes the specified HADR group and stops the local node from participating in an HADR system:
sp_hadr_admin 'drop_application_interface','<HADR_server_name>'
sp_hadr_admin 'list_application_interface'
Parameters
<group_name>
specifies the HADR group you are adding or dropping.
<local_HADR_server_name>
specifies the HADR server name. The default is <@@servername>.
addserver
adds a server to the HADR system and member list.
<HADR_server_name>
is the server you are adding to – or dropping from – the HADR system or member list.
<pname>
is the name specified in the interfaces file for the server named
<HADR_server_name>.
nopropagate
disables automatic propagation of the server name to the member list for standby
servers and to user connections enabled with hadr_list_cap (user connections with
HADR capabilities).
dropserver
drops a server from the member list.
primary
promotes the standby server to primary mode.
force
forces either of the following:
● A standby server to the role of a primary server if some members of the HADR
system are unreachable
● A transition to an inactive state if there are ongoing (unprivileged or otherwise)
transactions when the <timeout> expires
activate
activates the inactive primary server.
<label>
● The string used by the log drain mechanism to mark the transaction logs of those
HADR databases that have been successfully drained by Replication Agent during
the deactivation or drain process
● The string used to retrieve the status of transaction replication on the standby
server. <label> is ignored if you issue sp_hadr_admin status [, <label>]
on the primary server
nodrain
allows deactivation to occur without initiating a log drain. By default, if you do not
include the nodrain parameter, the server initiates a log drain.
<drain_timeout>
is the length of time, in seconds, allocated for the log to drain. The drain is terminated
after the timeout period elapses. If you include <drain_timeout> period with the
deactivate parameter, the server is set to active mode after the timeout period ends.
cancel
aborts the deactivation process.
standby
moves the primary server to standby mode.
drain
drains the transaction log.
status
checks the progress of the deactivation process. The information status reports
depends on whether the server is in primary or standby mode.
dropgroup
drops the HADR group. Do not execute dropgroup until you drop all the HADR
members.
failover_timeestimate
estimates the amount of time it takes to roll back open transactions. Run
sp_hadr_admin failover_timeestimate from the primary server.
<standby_server_name>
Examples
Example 1
Adds the PARISDR server to the member list using the <pname> format <hostname:port>:
Example 3
Drops the PARIS server from the HADR group without propagating the change to the other members in the
group:
Example 4
Attempts to promote the LONDON server to primary mode, but the HADR system cannot connect to the
PARISDR server:
sp_hadr_admin primary
Msg 11206, Level 16, State 1:
Server 'JAIDB', Line 1:
Unable to connect to server 'PARISDR'.
Cannot promote the server to PRIMARY mode due to split brain check error.
Use the 'force' option to force the promotion.
Msg 19842, Level 16, State 1:
Server 'MYSERVER', Procedure 'sp_hadr_admin', Line 531:
'primary' encountered an error and could not succeed.
Example 5
Example 6
sp_hadr_admin activate
(return status = 0)
Command 'event' successful.
(1 row affected)
(0 rows affected)
(return status = 0)
Command 'activate' successful.
(1 row affected)
Example 7
sp_hadr_admin 'deactivate','30','scheduled_offline'
User connections statistics:: 0 in xact, 0 in chained mode, 9 in unchained
mode, 0 holding server side cursors.
Server reached INACTIVE state.
Initiating log drain mechanism.
Command 'deactivate' successful.
(return status = 0)
sp_hadr_admin cancel
Command 'cancel' successful.
(return status = 0)
Example 9
sp_hadr_admin standby
Command 'standby' successful.
(return status = 0)
Example 10
Initiates the log drain using the string scheduled_offline_07092013 as the label:
Example 11
Example 12
sp_hadr_admin failover_timeestimate
total potential rollback time (mins)
------------------------------------
0
(1 row affected)
dbid rep_drain_time
------ --------------
(0 rows affected)
total potential rep drain time
------------------------------
NULL
(1 row affected)
dbid   number_of_active_xact longest_elapsed_time name_of_oldest_xact            spid   start_time
------ --------------------- -------------------- ------------------------------ ------ -------------------------------
     4                     2                 1831 db1_T1                             14 Mar  8 2021  2:39AM
     6                     2                 1699 db3_T1                             21 Mar  8 2021  4:51AM
     5                     1                 1710 db2_T1                             15 Mar  8 2021  4:40AM
(3 rows affected)
Command 'failover_timeestimate' successful.
(return status = 0)
Example 13
sp_hadr_admin mode
HADR Mode is :
------------------------------------------------------------
Starting
Example 14
sp_hadr_admin state
HADR State is :
------------------------------------------------------------
Inactive
Example 15
Adds the application interfaces on port 30015 to the LONDON and PARIS cluster hosts:
sp_hadr_admin 'add_application_interface','LONDON','lily:30015'
sp_hadr_admin 'add_application_interface','PARIS','daisy:30015'
sp_hadr_admin 'list_application_interface'
name network_name
------------ ----------------------------
LONDON lily:30015
PARIS daisy:30015
Usage
● If you do not specify <pname>, sp_hadr_admin uses <HADR_server_name>. Use this format to specify the host
name or IP address and port for the <HADR_server_name> server:
○ <hostname:port>
○ <ipaddress:port>
● force does not promote the standby server to a primary server if the HADR system detects an existing
primary server. The administrator must first demote the existing primary server before reissuing the force
parameter.
● The deactivate parameter triggers a transition from the active to the deactivating state, and then to the
inactive state.
● sp_hadr_admin deactivate ignores the <label> parameter when you include the nodrain
parameter.
● If the deactivation cannot complete in the period of time indicated by <timeout_period>, the server
returns to active mode. Monitor the progress of replication by searching for the label 'scheduled
offline' in the Replication Server log.
sp_hadr_admin status
Database Name Log Drain Status Log Pages Left
-------------- ---------------- ----------------------------
db1 completed 0
db2 completed 0
master completed 0
When the HADR system has been set up, a number of proxy tables are created in the master database of the
primary server.
● hadrGetLog
● hadrGetTicketHistory
● hadrStatusActivePath
● hadrStatusPath
● hadrStatusResource
● hadrStatusRoute
12.3.1 hadrGetLog
Syntax
Parameters
_logicalHost
The logical hostname.
_serverType
Type of server. Either RS (Replication Server) or RMA (Replication Management Agent).
_startDate
The start date of the log, in the format 'YYYY-MM-DD HH:MM:SS'.
Examples
Provides information about the local host PR for the specified date.
12.3.2 hadrGetTicketHistory
The hadrGetTicketHistory table is a proxy table that is created after the HADR system is set up. After you log
in to the primary ASE as user DR_admin, use the hadrGetTicketHistory command to retrieve multiple
ticket rows (one ticket per row) where pdb_t is later than a user-specified start date.
pdb_t indicates the time stamp when the ticket was injected into the corresponding primary ASE database.
Syntax
Parameters
_logicalHost
The name of the logical ASE host from which to query ticket information.
_DBName
The name of the database.
_startDate
The start date from which ticket information is returned, in the format 'YYYY-MM-DD HH:MM:SS'.
Examples
Displays ticket information on the HA host for the ERP database since '2010-01-01 15:30:00':
12.3.3 hadrStatusActivePath
The hadrStatusActivePath table displays information about the active connection status between local and
remote HADR hosts.
Syntax
Information Description
hadrMode An external mode that is visible to and known by other HADR members, including
"Primary", "Standby", "Disabled", "Unreachable" and "Starting".
hadrState An internal state that is known only by the member, including "Active", "Inactive" and
"Deactivating".
SynchronizationMode The replication synchronization mode you have configured between a database and
the SAP Replication Server, which can be one of:
● Synchronous
● Asynchronous
DistributionMode The replication distribution mode you have configured between a database and the
Replication Server, which can be one of:
● Local
● Remote
State The state of the replication path, including "Active", "Suspended", and so on. Displays
only those paths where State is Active in hadrStatusActivePath.
DrainStatus The status of draining the primary database server's transaction logs. Values are:
● Drained: The primary database server's transaction logs are completely transferred
to Replication Server.
● Not Drained: The primary database server's transaction logs are only partially
transferred to Replication Server.
● Unknown: The status cannot be queried.
Examples
Displays information about the connection status between local and remote HADR hosts.
12.3.4 hadrStatusPath
Syntax
Columns
hadrMode An external mode that is visible to and known by other HADR members. One of: "Primary",
"Standby", "Disabled", "Unreachable", and "Starting".
hadrState Internal state that is known only by the member. One of: "Active", "Inactive", and
"Deactivating".
SynchronizationMode The replication synchronization mode configured between a database and Replication
Server. One of:
● Synchronous
● Asynchronous
DistributionMode The replication distribution mode configured between a database and Replication
Server. One of:
● Local
● Remote
State State of the replication path. Set to "Active", "Suspended", and so on.
DrainStatus The status of draining the primary database server's transaction logs. Values are:
● Drained: The primary database server's transaction logs are completely transferred
to Replication Server.
● Not Drained: The primary database server's transaction logs are only partially
transferred to Replication Server.
● Unknown: The status cannot be queried.
Examples
12.3.5 hadrStatusResource
The hadrStatusResource table monitors the resources in the Replication Server and ASE that are
known to the DR Agent.
Syntax
EstimatedFailoverTime Estimated amount of time left for the primary server to fail over to the standby
server.
Example
12.3.6 hadrStatusRoute
Displays the status of the connection routes between the HADR nodes.
Example
12.3.7 hadrStatusSynchronization
hadrMode An external mode that is visible to and known by other HADR members. One of
● Primary
● Standby
● Disabled
● Unreachable
● Starting
hadrState An internal state that is known only by the member. One of:
● Active
● Inactive
● Deactivating
SynchronizationMode The replication synchronization mode configured between a database and Replication Server. One of:
● Synchronous
● Asynchronous
Example
Replication Server provides commands and parameters that you use in an HADR system.
Replication Server provides parameters for configuring stream replication, simple persistent queue (SPQ),
Capture, and external replication.
Parameter Description
Value: 10 to 10,000
Default: 50
Value: 1 to 1,000,000
Default: 80
The total size of the stream replication buffer pool is ci_pool_size x ci_package_size. The buffer is allocated when the stream replication stream starts. For example, if you set ci_pool_size to 50 and ci_package_size to 1 MB, about 50 MB of memory is allocated when the stream replication stream starts.
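The sizing rule above is a simple multiplication. This sketch (the function name is mine, not an SAP API) just makes the arithmetic explicit so you can estimate memory for different settings:

```python
def ci_pool_bytes(ci_pool_size: int, ci_package_size_mb: int) -> int:
    """Estimate stream replication buffer pool memory in bytes:
    total = ci_pool_size x ci_package_size."""
    return ci_pool_size * ci_package_size_mb * 1024 * 1024

# Example from the text: 50 packages of 1 MB each -> about 50 MB.
print(ci_pool_bytes(50, 1) // (1024 * 1024))  # 50
```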
Parameter Description
Value: 20 MB to 100 GB
Default: 2 GB
spq_max_size Specifies the maximum size of an SPQ. The total size of all SPQ data files cannot exceed the size set by spq_max_size. When the data reaches the value of spq_max_size, the SPQ becomes full and a new SPQ is created.
Value: 100 MB to 1 TB
Default: 100 GB
spq_data_file_size Specifies the size of each SPQ data file. If adding the next message makes the data file larger than the specified size, SPQ writes the message to the next SPQ data file. SPQ truncates or removes the data file after all the messages have been read by Capture.
Value: 1 MB to 10 GB
Default: 1 GB
Value: on or off
Default: on
spq_cache_size Specifies the SPQ cache size. The SPQ cache is a memory pool that caches enqueued data before it is written to disk, and reads dequeued data ahead of time.
Value: 1 KB to 100 MB
Default: 10 MB
Parameter Description
Default: false
Value: 2 MB to 2 GB
Default: 8 MB
Value: 1 to 20
Default: 2
Value: ci or capture
Default: capture
Parameter Description
Parameter Description
Default: false
Replication Server adds new parameters into some commands to monitor states for stream replication,
Capture, and SPQ.
Displays disk space usage information of a specific SPQ or SPQs for the Capture path.
Syntax
Parameters
mb
Prints backlog information for the Capture. The Backlog column displays the value of
backlog.
spq
Displays disk usage information about the simple persistent queues for the Capture
path.
<dsname>
Specifies the name of the primary data server for the simple persistent queue.
<dbname>
Specifies the name of the primary database for the simple persistent queue.
Examples
Example 1
Displays information for an SPQ:
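A hedged sketch of the invocation, following the form in the table caption below (the primary data server and database names DS1 and tdb1 are hypothetical):

```sql
admin disk_space, mb, spq, DS1, tdb1
go
```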
Table 27: Column Descriptions for the output of admin disk_space, mb, spq
Total Size
Used Size Total size in megabytes currently used by the Replication Server.
Packages The number of packages that have not yet been read by Capture; the backlog of the SPQ for the Capture path.
Displays disk space usage and backlog information of a specific SPQ or SPQs for the SPQ Agent path.
Syntax
Parameters
mb
Prints backlog information for the SPQ Agent. The Backlog column displays the value of
backlog.
spqra
Displays disk usage information about the simple persistent queue for the SPQ Agent.
<dsname>
Specifies the name of the primary data server for the SPQ Agent.
<dbname>
Specifies the name of the primary database for the SPQ Agent.
Examples
Example 1
Displays backlog and other information for an SPQ Agent:
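A hedged sketch of the invocation, following the form in the table caption below (the primary data server and database names DS1 and tdb1 are hypothetical):

```sql
admin disk_space, mb, spqra, DS1, tdb1
go
```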
Usage
Table 28: Column Descriptions for the output of admin disk_space, mb, spqra
Used Size Total size, in megabytes, currently used by the Replication Server.
Packages The number of packages that have not yet been read by the SPQ Agent; the backlog of the SPQ for the SPQ Agent.
Displays information and statistics related to the SPQ writer and readers (Capture and SPQ Agent) for a
specific SPQ or all SPQs.
Syntax
Parameters
spq
Displays information for SPQ.
<ds>
Specifies the name of the primary data server for the SPQ.
<db>
Specifies the name of the primary database for the SPQ.
<dbid>
Specifies the database ID.
Examples
Example 1
Displays detailed information, including both SPQ writer and readers for a specific SPQ:
Example 2
Usage
● This command supports the monitoring of multiple SPQ readers, including Capture and SPQ Agent.
● If you run the command:
○ without any parameters, it prints the results of the SPQ writer and the Capture reader
○ with the <ds.dbname> or <dbid> parameters, it prints detailed results that include both SPQ writer and SPQ reader statistics.
Syntax
Examples
Example 1
Syntax
Parameters
cap
Displays information for Capture components.
<ds>
Name of the primary data server for the Capture component.
<db>
Name of the primary database for the Capture component.
<dbid>
Specifies the database ID.
Examples
Example 1
Usage
Pending Writes The number of pending IBQ writes requested by this Capture.
Pending Bytes The number of bytes pending to be written into the IBQ, as requested by this Capture.
Pending Max The maximum number of pending bytes to write to the IBQ. The output of this option is configured by cap_sqm_write_request_limit.
Last OQID Received The OQID of the last command this Capture received.
Last OQID Moved The truncation point that this Capture last requested to move.
Last OQID Delivered The OQID of the latest command that is persisted in the IBQ.
Displays information for a specific stream replication stream or all stream replication streams.
Syntax
Parameters
ci
Displays information for stream replication streams.
<ds>
Specifies the name of the primary data server for the stream replication stream.
<db>
Specifies the name of the primary database for the stream replication stream.
<dbid>
Specifies the database ID.
Examples
Example 1
Usage
● Sync
● Async
Displays information about the current version of the Component Interface (CI) in use by the SAP Replication
Server. If you intend to use stream replication in the Replication Server, CI must be enabled. This command
does not accept any additional parameters.
Syntax
admin version, ci
Parameters
ci
Displays version information of stream replication streams. Replication Server works even without a CI (that is, without stream replication), but in this scenario, running the command does nothing.
Examples
Example 1
admin version, ci
go
CI Library
Version
----------------------------------------------------------------------------
SAP CI-Library/15.7.1/EBF 26750 SP306 rs1571sp306/CI 1.7.1/Linux AMD64/Linux
2.6.18-164.el5 x86_64/1/DEBUG64/Thu Jan 5 22:23:09
2017
----------------------------------------------------------------------------
Info Negotiated Version
----------------------------------------------------------------------------
mo0_13339.tdb1 1.7
mo0_13339.tdb2 1.7
----------------------------------------------------------------------------
Syntax
Suspend Capture:
Parameters
<dsname>
Specifies the name of the primary data server for the Capture.
<dbname>
Specifies the name of the primary database for the Capture.
all
Stops all Captures.
disable replication
Disables replication in an HADR environment.
Usage
● The suspend capture command has no impact on the connection between Replication Agent and Replication Server. If the log transfer is not suspended and the Replication Agent is active, it can still send data to the SPQ after the Capture is suspended.
● When you use <dsname.dbname>, the command stops the active path from replicating downstream in the HADR replication path, while continuing to replicate data from the SAP ASE Replication Agent to an external system.
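For illustration, a hedged sketch of the two forms from isql (the server and database names DS1 and tdb1 are hypothetical):

```sql
-- Stop the Capture for one primary database:
suspend capture, DS1.tdb1
go
-- Stop all Captures:
suspend capture, all
go
```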
Syntax
Parameters
<dsname>
Specifies the name of the primary data server for the Capture.
<dbname>
Specifies the name of the primary database for the Capture.
all
Starts all Captures.
During replication, the Replication Server receives data from the Replication Agent and writes it to the SPQ file. If the Replication Server encounters an error, it can refer to the SPQ file to retrieve the lost data. However, if the SPQ file data is removed immediately, the Replication Server cannot validate the data, which results in erroneous or incomplete information. To prevent the immediate removal of data in the SPQ file, use the spq_save_interval parameter in the alter connection command.
The spq_save_interval parameter allows you to configure the data retention time (in minutes). This value indicates the amount of time that the system waits before removing or truncating outdated data.
Sample Code
Where:
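A hedged sketch of the clause (the connection name DS1.tdb1 and the 60-minute retention value are hypothetical):

```sql
alter connection to DS1.tdb1
set spq_save_interval to '60'
go
```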
12.4.4.2 spq_dump_queue
The spq_dump_queue command enhances the diagnostic capability of Replication Server in the HADR environment. It provides vital information that you can use to investigate the root cause of data loss in your HADR environment, so that remedial action can be taken if necessary.
The spq_dump_queue command interprets the raw binary data (stored in the SPQ file) into a more comprehensible, user-consumable form. The command is used with an existing command (dump_file) to dump the data into a file for debugging purposes. After you specify the dump file location using the dump_file command, the SPQ dump operation triggered by the spq_dump_queue command writes its content to this file. You can use this output to identify the lost data, and then proceed with the subsequent troubleshooting steps. If you have not specified the file location before using the spq_dump_queue command, the dump file is saved in the SPQ file directory itself.
Typically, the following commands should be executed to dump the SPQ queue:
Syntax
<filename>
specifies the SPQ file.
<number>
indicates the number of messages from the SPQ file that you want to include in the
data dump. If this value is not specified, the Replication Server selects up to the last
message in the SPQ file when the command is executed.
<begin_time>
Indicates the begin timestamp. If you use this option, all the SPQ files that are updated after the timestamp are considered the dumping target. It must be used with an end timestamp that is specified by the end_time parameter. The special value NULL can be used to indicate <anytime>.
<end_time>
Indicates the end timestamp. If you use this option, all the SPQ files that are updated before the timestamp are considered the dumping target. It must be used with the begin_time parameter. Similarly, NULL can be used to indicate <anytime>.
Examples
Example 1
Example 2
Dumps all the SPQ files that are updated before 11/07/15 08:00:00:
Example 3
Dumps the first 10 messages in the specific SPQ file called SPQ_2.dat:
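A hedged sketch of such an invocation (assuming spq_dump_queue is issued through sysadmin like its sibling commands; the dump file name is hypothetical):

```sql
-- Optionally direct dump output to a named file first:
sysadmin dump_file, 'spq_dump.out'
go
-- Dump the first 10 messages of SPQ_2.dat:
sysadmin spq_dump_queue, 'SPQ_2.dat', 10
go
```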
Note
In all the examples, since a dump file is not specified, each dump is generated into a single random file
in the SPQ file directory.
Inserts a ticket to the SPQ. Use this command to flush a replication path when SAP ASE is down.
Syntax
sysadmin issue_ticket {, <dbid> |{, <ds>, <db>}}, <q_type>, h1[, h2[, h3[, h4 ]]]
Parameters
<dbid>
Specifies the database ID.
<ds>
Specifies the name of the primary data server for the SPQ.
<db>
Specifies the name of the primary database for the SPQ.
<q_type>
Specifies the type of queue. For SPQ, <q_type> must be 2. If <q_type> is 0, 1, or not
provided, the ticket is inserted into IBQ or OBQ.
Examples
Example 1
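Following the syntax above, an invocation might look like this sketch (the server name DS1, database name tdb1, and header value 'start' are hypothetical; 2 selects the SPQ):

```sql
sysadmin issue_ticket, DS1, tdb1, 2, 'start'
go
```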
Forces the SPQ to send unconfirmed commands (NC-CMDs) to Capture. When SAP ASE is down and you run this command, log transfer from the specified data server and database is suspended automatically, the DSI to the primary database is suspended, and the outbound queue for the primary database is purged.
Syntax
Parameters
<ds>
Specifies the name of the primary data server for the SPQ.
<db>
Specifies the name of the primary database for the SPQ.
Caution
Purging messages from the SPQ may result in data loss; use this command only on the advice of SAP Technical Support. Run this command if you want to set up the replication path again with rematerialization.
Syntax
Parameters
<dsname> The data server name of the component interface (CI) connection.
Example
Usage
Before you purge the SPQ, suspend the log transfer and capture, or make sure SAP Replication Server is in
hibernation mode.
Permissions
Requires sa permission.
The commands in this section enable you to manage the SPQ Agent, and are issued by the system. To run any of these commands, execute the corresponding Replication Management Agent commands; the system then internally triggers the SAP Replication Server commands.
Note
Syntax
Parameters
<dbname>
Specifies the database name of the external connection which the SPQ Agent uses for
the connect source command.
<external_rs_host:external_rs_port>
Specifies the connection information of the external Replication Server.
<ext_dsname>
Specifies the data server name of the external connection which the SPQ Agent will use
for the connect source command.
<active_dsname>
Specifies the current data server name used by the Replication Agent thread for SAP
ASE. It is retrieved by using the following command:
<spq_agent_user>
Specifies the user that SPQ Agent uses to connect to the external Replication Server.
<spq_agent_passwd>
Specifies the password that SPQ Agent uses to connect to the external Replication
Server.
<maintuser_name>
The maintenance user used by the external Replication Server. Transactions from the
maintenance user will be filtered by the external Replication Server.
Usage
Permissions
Syntax
Parameters
<dbname>
Name of the database of the external connection that the SPQ Agent uses for the
connect source command.
Usage
● Executing this command deletes all related rows in the tables rs_spqratgroup, rs_spqratstate, and
rs_spqratcfg, and SPQ can be truncated without considering external replication.
● The SPQ Agent must be stopped before this command is issued.
Alters the configuration value used by the SPQ Agent. Execute this command in Replication Servers within an
HADR system that are either on standby or in an active site.
Syntax
Parameters
<dbname>
Specifies the name of the database of the external connection that the SPQ Agent uses
for the connect source command.
<opname>
Specifies the name of configuration. The configurations that can be altered are:
connect_dataserver, connect_database, rs_servername, and
maint_username.
'<value>'
The value to set the configuration to.
Examples
Example 1
Example 2
Permissions
Starts the SPQ Agent thread for external replication. Issue it within any Replication Server inside HADR.
Syntax
Parameters
<dbname>
Specifies the name of the database of the external connection that the SPQ Agent uses
for the connect source command.
Usage
● If the current Replication Server does not control the active SPQ Agent, it uses the same credentials to log
in to the controlling Replication Server, and forwards this command. After you execute this command, the
SPQ Agent on the active member (specified in the rs_spqratstate.active_member table) of the group
starts.
● To execute this command, the SPQ Agent must be enabled, but not already started.
Permissions
Stops the SPQ Agent thread for external replication. Issue it within any Replication Server inside HADR.
Syntax
Parameters
<dbname>
Specifies the name of the database of the external connection that the SPQ Agent uses
for the connect source command.
<disable replication>
This option disables replication to an external replication system, stops the SPQ Agent
immediately, and disables the truncation point.
Usage
● If the current Replication Server does not control the active SPQ Agent, it uses the same credentials to log
in to the controlling Replication Server, and forwards this command. After you execute this command, the
state column in the rs_spqratstate table is set to 0x03, meaning enabled and suspended.
● If you specify the <disable replication> option, the truncation point of SPQ Agent is disabled from its
SPQ, so that you can truncate SPQ without considering external replication. Because this may cause you
to experience data loss, rematerialize when you resume the SPQ agent.
● A suspended SPQ Agent does not restart automatically after Replication Server restarts.
● This is a synchronous command: it waits and returns success only after the SPQ Agent thread completely exits, or it returns immediately if an error occurs.
Permissions
Starts all local SPQ Agents from the same SAP ASE, and requires all other SPQ Agents for this database to be in an inactive or a draining state.
Syntax
Parameters
<servername>
The primary HADR SAP ASE server from which data is replicated.
[with force]
Gives you a way to force switch when the previous active Replication Server is down and
the data in the previous SPQ can be discarded.
Usage
● If the SPQ Agent for the previous HADR active site is in a draining state, the local SPQ Agent is put in a
waiting state. It is then put in an active state when the previous agent has drained completely and become
inactive. As long as it has not been previously explicitly suspended, the SPQ Agent thread starts after all
other members are in an inactive state.
● The command returns with no change if one of the local members is already in an active or waiting state.
● If the local member is in an inactive state, it syncs up with the other Replication Servers and does one of the following:
○ Enters a waiting state if the current active member is in a draining state
○ Enters an active state if all other members are in an inactive state
○ Fails if the current active member is in an active or a waiting state.
● If the local member is in a draining state, the command returns with no change and tells the administrator to issue the command again after the drain has completed.
● If the local SPQ Agent has not been previously explicitly suspended and if all other Replication Servers are
down, the agent starts immediately, and local SPQ Agents are moved into active state. If all other
Replication Servers are up, the command checks to see whether other members are in an inactive state
before it starts.
● If all the Replication Servers in the HADR domain that are in an active, draining, or waiting state are up, the behavior is the same as when you do not specify with force.
Permissions
Stops all local SPQ Agents either immediately, or after data is drained, putting the local SPQ Agents in an
inactive or drained state depending on the option used. This command plays a vital role in failover scenarios.
Syntax
Parameters
<servername>
The current, active HADR SAP ASE server from which data is replicated.
after drain
Stops all local SPQ Agents after the data has been drained, putting them in a drained state.
disable replication
Immediately stops all local SPQ Agents without draining data. Also disables the truncation point for each SPQ Agent from its SPQ. This means that database rematerialization is needed if the SPQ Agent is re-activated later.
Permissions
Replication Server provides commands to create a connection to an HADR database in an external Replication
Server, and to change the attributes of a database connection.
Syntax
Parameters
<data_server>
Specifies the data server that holds the database to be added to the replication system.
<database>
Specifies the database to be added to the replication system.
<error_class>
Specifies the error class that is to handle errors for the database.
<function_class>
Specifies the function string class to be used for operations in the database.
set username [to] <user>
Specifies the username that the external Replication Server uses to log onto the active
Replication Server, and to HADR-enabled SAP ASE, and which meets these
requirements:
Example 1
Example 2
Usage
● Connects to the ASE server as the maintenance user, and issues the following commands to obtain the
name of active Replication Server and the current data server name, with the results displaying in
<host>:<port> format:
● Connects to the active Replication Server as the maintenance user and issues following command to
enable SPQ Agent:
select rep_agent_config(…)
Table 33:
Syntax
<data_server>
Specifies the data server that holds the database to be added to the replication system.
<database>
Specifies the database to be added to the replication system.
for replicate table named [<table_owner>.]<table_name>
Specifies the name and owner of the table at the replicate database. <table_owner>
is an optional qualifier for the table name, representing the table owner. <table_name>
is the name of the table at the replicate database, and can be up to 200 characters
long.
set table_param [to] '<value>'
Specifies the table-level parameter that affects a table you specify with the for
replicate table name clause.
set function string class [to] <function_class>
Specifies the function string class to be used for operations in the database.
set error class [to] <error_class>
Specifies the error class that is to handle errors for the database.
set replication server error class [to] <rs_error_class>
Specifies the error class that handles Replication Server errors for a database. The default is rs_repserver_error_class.
set password [to] <passwd>
Use this clause to modify the password (if needed) for the maintenance user that was specified in the create connection command, together with the set username [to] <user> clause. Note that <user> (the username assigned to the maintenance user) is not a changeable configuration in alter connection.
set log transfer [to] {on | off}
Indicates that the connection may be a primary data source or the source of replicated
functions. When you specify this clause, RepAgent or SPQ Agent creates an inbound
queue and is prepared to accept a RepAgent connection. These commands are sent to
the primary Replication Server for distribution and replication. RepAgent also
coordinates database log truncation with the Adaptive Server and the primary
Replication Server.
set spq_agent_username [to] '<value>'
Specifies the username for the credential SPQ Agent uses to connect to an external
Replication Server to replicate data. Both this username and its password must already
exist in an external Replication Server with a “connect source” permission.
set spq_agent_password [to] '<value>'
Specifies the password for <spq_agent_username>.
set database_param [to] '<value>'
Allows you to specify a value that affects database connections from the Replication
Server.
Note
If you specify an empty string for '<value>', ExpressConnect tracing values are disabled after the connection is altered, or after Replication Server restarts. For example:
Example 1
Example 2
Replication Agent Thread for SAP ASE (RepAgent for short) provides some commands and parameters that
you can use in an HADR system.
This topic lists the RepAgent configuration parameters that apply to stream replication only. For other RepAgent configuration parameters, refer to the SAP Replication Server Reference Manual.
'buffer pool size', {'<buffer pool size value>'}
Specifies the maximum size of the buffer pool, that is, the number of buffers (packages) stream replication can allocate on startup.
Value: 1 to 2,147,483,647
Default: 8
'initial log scan percent', {'<initial log scan percent value>'}
A dynamic configuration parameter, this specifies the percentage of the initial log that RepAgent should scan before it evaluates whether to slow down user tasks. This also applies to every multiple of the percentage until RepAgent scans all the initial log.
Value: 1 to 100
Default: 100
'max commands per package', {'<max commands per package value>'}
Specifies the maximum number of commands that can be put in a stream replication package.
Value: 1 to 1,000,000
Default: 80
'max commit wait', {'<max commit wait value>'}
Specifies the maximum amount of time, in microseconds, that a user task committing a transaction waits for acknowledgment of the commit from SAP Replication Server. When the commit time expires because the user task did not receive acknowledgment, the task makes additional calculations to determine whether it needs to request RepAgent to switch to asynchronous replication mode to allow the application to proceed. The additional logic is based on other configuration parameters. See peak transaction threshold and peak transaction timer for more information.
Note
If the value is set to zero, all user tasks committing transactions wait indefinitely for acknowledgment.
Specifying a:
'max stream retry', {'<max stream retry value>'}
Specifies how often RepAgent retries to set up a connection. RepAgent shuts down once the configured value is reached.
Value: -1 or 2,147,483,647
'max user task slowdown', {'<max user task slowdown value>'}
A dynamic configuration parameter, this specifies the maximum amount of time, in milliseconds, for the slowdown imposed on user tasks for replication to switch to sync mode at RepAgent startup.
'replicate admin commands', {'<replicate admin commands value>'}
Specifies whether or not to replicate the update statistics and delete statistics commands.
Default: false
'stream buffer size', {'<stream buffer size value>'}
Specifies the size of a stream replication buffer (package) in bytes. Each stream replication package in the stream replication buffer pool shares the same size specified by stream buffer size.
Value: 1 to 2,147,483,647
'stream mode', {'<stream mode value>'}
Specifies the replication synchronization mode between RepAgent and SAP Replication Server. The database must be configured for stream replication for this option to take effect.
Default: sync.
Default: false
'peak transaction threshold'[, '<peak_transaction_threshold>']
Specifies the maximum number that a global counter can reach before RepAgent switches from synchronous or near-synchronous mode to asynchronous mode. The global counter increases by one when a task has an average commit wait time that is greater than the configured maximum commit wait time. When the global counter reaches the specified peak transaction threshold, the task requests RepAgent to switch the stream mode.
Default: 5
'peak transaction timer'[, '<peak_transaction_timer>']
Sets the amount of time, in seconds, that RepAgent waits before resetting the global counter to zero. The global counter records the number of times a task commit wait time exceeds the configured maximum commit wait time. The timer restarts after the global counter is reset to zero. Use the peak transaction timer to avoid a mode switch caused by accumulated spikes in the average commit wait time over a long period of time.
When RepAgent is configured for stream replication to support synchronous replication in an HADR system,
the output for the process parameter in sp_help_rep_agent shows additional status information about the
Coordinator, Scanner, and Secondary Truncation Point Manager processes. This output differs from the output
when the replication mode is through log transfer language (LTL).
However, when RepAgent is configured for stream replication, the output for other parameters in
sp_help_rep_agent matches the output when the replication mode is through log transfer language (LTL).
Refer to the output for other parameters in sp_help_rep_agent in the SAP Replication Server Reference
Manual.
Table 34: Column Descriptions for Output from sp_help_rep_agent with 'process' during Stream Replication
Column Description
<dbname> The name of the database for which you are querying process information.
<pathname> The name of the replication path associated with each sender or scanner process if you
configure multiple replication paths and scanners (multi-path Replication only).
<spid> The system process ID of a process in the dataserver. For a multithreaded RepAgent, spid identifies the coordinator task if you enable multiple scanners.
<sender_spid> The system process ID of each sender process in the dataserver (not applicable for
Stream Replication).
<start marker> Identifies the first log record scanned in the current batch.
<end marker> Identifies the last log record to be scanned in the current batch.
<current marker> Identifies the current log record being scanned.
<trunc_pts_confirmed> The number of confirmed truncation points. A confirmed locater is received for replicated log operations that were actually written to disk.
<trunc_pts_processed> The number of truncation points processed. For example, the number of times the Secondary Truncation Point could be moved in the primary database.
<sleep status> See the <sleep_status> and <state> columns together for the status of the coordinator, scanner, and secondary truncation point manager.
<state>
● RepAgent Coordinator Process Status During Stream Replication [page 629]
● RepAgent Scanner Process Status During Stream Replication [page 630]
● RepAgent Secondary Truncation Point Manager Process Status During Stream Replication [page 630]
scanner_type Indicates the type of scanner. RepAgent supports the following two types of scanners
when you enable in-memory row storage (IMRS) on a database.
● syslogs_scanner
● sysimrslogs_scanner – not available for non-IMRS databases.
When stream replication mode is enabled, you need to look at values for both the columns, <sleep_status>
and <state>, to determine the thread and process status of the RepAgent Coordinator, Scanner, and
Secondary Truncation Point Manager. Some of the statuses are tagged "rare" because they may appear very
briefly as the process task moves from one common state to another and are therefore extremely unlikely to be
seen.
stopping sleep on task terminate Coordinator is waiting for scanner and Secondary
Truncation Point Manager to stop.
Table 37: RepAgent Secondary Truncation Point Manager Process Status During Stream Replication
process trunc. point not sleeping Secondary Truncation Point Manager is processing a new truncation point.
Specifies the timeout value for the message channel from RepAgent to SAP Replication Server.
Syntax
Parameters
<stream_rep_msg_channel_timeout> The maximum time, in seconds, that RepAgent waits for the
response from SAP Replication Server.
Examples
Example 1
This example sets the RepAgent message channel timeout to 100 seconds:
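A hedged sketch of the call (the exact sp_configure parameter name is assumed from the syntax placeholder above and should be verified against the Reference Manual):

```sql
sp_configure 'stream rep msg channel timeout', 100
go
```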
Usage
● Use sp_configure 'Rep Agent Thread administration' to check the current timeout value.
● The default timeout value is 60 seconds. Valid range is 0 - MAXINT. If you set it to 0, it means no timeout.
Anyone can execute sp_configure to display information about parameters and their values.