Oracle To SQL Migration FAQ - v7.0
0 May 2019
Summary
You are currently running an SAP system on a Unix, Windows or Linux operating system and Oracle, Informix, DB2,
Sybase, HANA or MaxDB database and wish to migrate your SAP system to Microsoft SQL Server.
You may also wish to convert your SAP system to Unicode during the migration to SQL Server. Much of the content
in this whitepaper is also applicable to migrations to other databases.
Background Information
SAP & Microsoft have extended the capabilities of the SAP OS/DB migration tools and procedures to simplify the
process of migrating SAP systems to SQL Server. This note contains the latest information regarding the technical
capabilities and features for OS/DB Migrations where the target database is SQL Server.
Please review the latest blogs at: http://aka.ms/saponazureblog
SAP DMO, NZDT and other Minimized Downtime Options for Moving to Azure
SAP provides a number of enhanced tools and procedures for ultra-low downtime migrations and/or upgrades.
Additional information can be found in 693168 – Minimized Downtime Service (MDS)
This note is the starting point for customers with VLDB systems and very low downtime requirements.
Services such as “Near-Zero Downtime Technology for SAP S/4HANA Conversion with Repeatable Delta Conversion”
are SAP Consulting Services projects and typically have a significant cost.
Additional information about the use of DMO for SAP on Azure projects can be found here:
https://azure.microsoft.com/en-us/resources/migration-methodologies-for-sap-on-azure/
Solution
The link https://wiki.scn.sap.com/wiki/display/SL/System+Copy+and+Migration contains more information on the
OS/DB Migration process. Also review note 82478.
Customers should target conversion throughput of around 1-2TB per hour using all the enhancements contained in
this document.
RECOMMENDATIONS
1. Required patch levels for Migration Tools, Windows & SQL Server
You must use these patch levels or higher for the following components. It is generally recommended to use the
most recent version of these components.
R3LOAD
Basis Release R3load Kernel
7.5x 753 latest release (S4 customers can use 7.73 kernels)
7.4x 753 latest release
7.3x Please use 722 EXT latest release
7.1x Please use 722 EXT latest release
7.0x Please use 722 EXT latest release
DBSL
Basis Release DBSL Kernel
7.5x 753 latest release (S4 customers can use 7.73 kernels)
7.4x 753 latest release
7.3x Please use 722 EXT latest release
7.1x Please use 722 EXT latest release
7.0x Please use 722 EXT latest release
MIGMON
The Java based Migration Monitor is downward compatible with 7.4x, 7.3x, 7.1x, 7.0x, 6.40, 4.6C and lower. Use the most
recent version. To download Migmon, check OSS Note 784118.
R3TA
R3TA Table Splitter is only available for Kernels 6.40 and higher. Use the most recent version. Review Note
1650246 - R3ta: new split method for MSSQL and Note 1784491 - R3ta: Split of physical Clustertables
SQL Server Enterprise Edition x64 - download and install the latest service pack and CU. Refer to Note 62988 -
Service packs for Microsoft SQL Server. This link is useful for finding the latest SP or CU for SQL Server:
http://blogs.msdn.com/b/sqlreleaseservices/
Do not use 32-bit versions of Windows or SQL Server. If your system is 4.6C based, run it on 64-bit Windows 2003
and 64-bit SQL Server 2005.
2. Hardware Configurations
Review SAP Note 1612283 - Hardware Configuration Standards and Guidance. Follow the guidance in this note.
Do not under-specify memory: 384GB is the minimum for new SAP server deployments, and customers with 1-3TB
of RAM are now mainstream.
It is strongly recommended to utilize FusionIO cards (or similar) for larger OS/DB Migrations.
DB Server:
Use a 2-socket server as above, or a 4-processor Intel Skylake server with 8-28 cores per processor, 1-4TB RAM
and a 10Gb network card. Cost = $33,000-56,000 list price*; SAPS = ~300,000-350,000
*Source: www.dell.com
3. Unsorted Export
An unsorted export is supported and may be imported into a SQL Server database. A sorted export takes much
longer and is only marginally faster to import into SQL Server. Unicode Conversion customers must
export certain cluster tables in sorted mode. This allows R3LOAD to read an entire logical cluster record,
decompress the entire record (which may be spread over multiple database records) and convert it to Unicode.
See Notes 954268, 1040674 and 1066404. The content of OSS Note 1054852 has been updated.
Our default recommendation is to export unsorted as in most cases the UNIX/Oracle or DB2 server has only a
fraction of the CPU, IO and RAM capacity of a modern Intel commodity server. Even though there is an overhead
involved in inserting rows into the clustered index on SQL Server, this overhead is relatively small.
4. Table Splitting
A table split export is fully supported and may be imported into a SQL Server database. Table split packages for
the same table may be imported concurrently.
Customers have successfully split large tables into a maximum of 20-80 splits and achieved satisfactory results on
tables that have poor import or export throughput. It is recommended to use the minimum number of splits possible,
especially if deadlocks are observed during the import.
There are some tables that we always recommend splitting due to slow export or import performance:
CDCLS, S033, TST03, GLPCA, STXL, CKIT, REPOSRC, APQD, REPOTEXT, INDTEXT
After generating WHR files with R3TA, the WHR splitter must be run to create split packages. Always set the
whereLimit parameter to 1, meaning one package for each WHERE clause.
5. Package Splitting
The Java based Package Splitting tool is fully supported in all cases. It is recommended not to use the Perl based
splitter.
This command will generate the TPL files and the default STR files (without the EXT files):
r3ldctl -l logfilename -p D:\exportdirectory
Note: Exports to SQL Server do not need Extent files and the whole Extent file (*.EXT) file generation process can
be skipped to save time. Instead it is recommended to use the following script to determine the largest tables in
the Oracle database:
spool tablefile.txt
set lines 100 pages 200
col Table format a40
col Owner format a10
col MB format 999,999,999
select owner "Owner", segment_name "Table", bytes/1024/1024 "MB" from dba_segments where bytes >
100*1024*1024 and segment_type like 'TAB%' order by owner asc, bytes desc;
spool off;
Then it is recommended to extract the largest tables (possibly anything more than ~2GB) into their own packages
(and also table split them if required). The following command can be used. Please note that when using SWPM, EXT
files are required; EXT files can be bypassed only when doing a manual Migmon-based migration.
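The spooled tablefile.txt can be filtered with a short script to produce the list of candidate tables for their own packages. A minimal sketch, assuming the spool lines carry three columns (Owner, Table, MB) per the col settings above; the sample rows and the 2GB cut-off are illustrative:

```shell
# Sample spool data for illustration only -- in practice tablefile.txt
# comes from the sqlplus script above.
cat > tablefile.txt <<'EOF'
SAPR3      CDCLS                                    10,240
SAPR3      T000                                     120
SAPR3      GLPCA                                    4,096
EOF
# Data lines have three fields: Owner, Table, MB.
awk 'NF == 3 && $3 ~ /^[0-9,]+$/ {
        mb = $3; gsub(",", "", mb)       # strip thousands separators
        if (mb + 0 > 2048) print $2      # table name if larger than ~2GB
    }' tablefile.txt > large_tables.txt
cat large_tables.txt
```

The resulting list can then be fed into the package splitter as tables to extract into their own packages.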
6. FASTLOAD
All SAP data types can now be loaded in Bulk Copy mode. It is recommended to set the -loadprocedure fast
option for all imports to SQL Server. These are the default settings for SAPInst. If Migration Monitor is used, this
parameter must be specified. Please also note that to support FastLoad on LOB columns, set the environment
variable BCP_LOB=1 and review Note 1156361.
The parameters we recommend for Migmon or SAPInst are: loadArgs=-stop_on_error -merge_bck -loadprocedure
fast
The script below shows the current status of the SAP Export using SAP Migration Monitor log files.
The script reloads every 20 seconds and displays:
- current CPU load
- currently running packages
- currently waiting packages
MigMonStatus.zip
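The MigMonStatus.zip script itself is attached above; as a minimal stand-in, package states can also be summarized directly from the Migration Monitor state file. A sketch, assuming the export_state.properties convention of 0 = not started, ? = in progress, + = finished and - = failed (verify these codes against your MigMon version; the sample file is illustrative):

```shell
# Sample state file for illustration -- MigMon maintains the real one.
cat > export_state.properties <<'EOF'
SAPAPPL0=+
SAPAPPL1=?
SAPAPPL2=?
SAPSSEXC=0
SAPCLUST=-
EOF
# Count packages per assumed state code.
for code in 0 '?' + -; do
    n=$(grep -c "=$code\$" export_state.properties)
    echo "state $code: $n package(s)"
done
```

Run in the Migration Monitor working directory; wrap in a watch loop if a refreshing view is wanted.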
8. Package OrderBy Recommendations
It is recommended to use an OrderBy.txt text file to optimize the export of an Oracle system and the import to
SQL Server. By default, a system will export packages in alphabetical order and import packages in size order.
The OrderBy.txt can be used to instruct Migration Monitor to start packages in a specific order. Normally the best
order is to start the longest running packages first. It is recommended to perform an export on a test system to
determine which tables are likely to run longest.
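Building the OrderBy.txt from measured test-export runtimes can be scripted. A sketch, assuming a hand-collected file of package/seconds pairs from the test export (package names and timings are illustrative):

```shell
# Measured test-export runtimes: "package seconds" (illustrative values).
cat > runtimes.txt <<'EOF'
SAPAPPL1 9800
SAPCLUST 31200
SAPAPPL0 4100
SAPSSEXC 600
EOF
# Longest-running packages first, one package name per line.
sort -k2,2 -n -r runtimes.txt | awk '{print $1}' > orderBy.txt
cat orderBy.txt
```

The resulting file is then passed to Migration Monitor so the slowest packages start first.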
Note: It is normal for the export and import runtimes of a package to be very different. Some packages may be
very slow to export yet very fast to import and vice-versa.
SAP have released SAP Note 1043380 which contains a script that converts the WHERE clause in a WHR file to
a ROW ID value. Alternatively the latest versions of SAPInst will automatically generate ROW ID split WHR files if
SWPM is configured for Oracle to Oracle R3LOAD migration. The STR and WHR files generated by SWPM are
independent of OS/DB (as are all aspects of the OS/DB migration process).
The OSS note contains the statement "ROWID table splitting CANNOT be used if the target database is a
non-Oracle database". Customers wishing to speed up an export from Oracle may send an OSS message to
BC-DB-ORA and request clarification of this restriction. Technically the R3LOAD dump files are completely independent
of database and operating system. There is one restriction, however: restart of a package during import is not
possible on SQL Server. In this scenario the entire table will need to be dropped and all packages for the table
restarted. A disadvantage of ROW ID is that the calculation of the splits must be done during downtime – see Note
1043380.
OS/DB Migrations larger than 1-2TB will benefit from separating the R3LOAD export processes from the Oracle
database server.
Note: Windows application servers can be used as R3LOAD export servers even for Unix or mainframe based
database servers. Intel-based servers have far superior SAPS/core performance than most Unix servers,
therefore R3LOAD will run much faster on Intel servers with a high clock speed.
The simplest way to allow Windows R3LOAD to log on to a Unix Oracle server is to change the password of
SAP<SID> (schema systems) or sapr3 (non-schema systems) to "sapr3" without quotes. This password is hardcoded into
R3LOAD. If the password cannot be changed, then the user account on the R3LOAD Windows server (normally
DOMAIN\<sid>adm) will need to be added to the SAPUSER table as OPS$<DOMAIN>\<SAPSID>ADM.
The SQL Server database should be manually extended so that the SQL Server automatic file growth mechanism
is not used, as it will slow the import. The transaction log file should be increased to ~500+GB for larger systems.
Migrations of 10TB+ systems need around 1-3TB of transaction log.
Max Degree of Parallelism (MAXDOP) should usually be set to 1. Due to the logic for parallelizing index REBUILD or
CREATE statements, it is highly likely that most index creation on SAP systems will be single threaded irrespective
of the MAXDOP specified. Some indexes may benefit from a MAXDOP of 4. Do not set MAXDOP to 0.
To activate minimized logging start SQL Server with Trace Flag 610. See SAP Note 1482275
If R3LOAD or SQL Server aborts during the import, you need to drop all the tables which were in process at that
time. The reason is that there is a small time window where data should be written to disk in a synchronous
manner, but the writes are asynchronous. Therefore the consistency of the table cannot be guaranteed and the
table should be dropped and the import restarted.
In general we recommend trace flags 610, 1118 and 1117. To display active trace flags, run DBCC TRACESTATUS.
Remove trace flag 610 after the migration.
e. Set the system environment variables MSSQL_DBNAME=<SID>, MSSQL_SCHEMA=<sid>,
MSSQL_SERVER=<hostname> (or MSSQL_SERVER=<hostname>\<inst> named instance) and
dbms_type=mss
f. If the database logins are required please manually create the users Domain\<sid>adm and
Domain\SAPService<SID> and then use the script attached to Note 1294762 - SCHEMA4SAP.VBS
g. Logon as Domain\<sid>adm and run R3LOAD -testconnect
a. Install the full 10g/11g/12c x64 client for Windows – not just the SAP client. It is easiest to work with the
full client.
b. Download the Oracle R3LOAD and DBSL – unzip and place in a directory such as
C:\Export\Oracle\Kernel
c. Set the following environment variables (it might be useful to make a small batch file for this):
SET DBMS_TYPE=ora
SET dbs_ora_schema=SAPR3 or <SID>SAP for schema systems
SET dbs_ora_tnsname=<SID>
SET NLS_LANG=AMERICAN_AMERICA.WE8DEC (or UTF8 if Unicode)
SET ORACLE_HOME=D:\oracle
SET ORACLE_SID=<SID>
SET SAPDATA_HOME= D:\Export\Oracle\Kernel
SET SAPEXE=D:\Export\Oracle\Kernel
SET SAPLOCALHOST=<set to local hostname>
SET SAPSYSTEMNAME=<SID>
SET TNS_ADMIN= D:\oracle\....ora home..\network\admin
d. Edit the SQLNET.ORA and TNSNAMES.ORA to resemble the below
################
# Filename......: sqlnet.ora
# Created.......: created by SAP AG, R/3 Rel. >= 6.10
# Name..........:
# Date..........:
# @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/SQLNET.ORA#4 $
################
AUTOMATIC_IPC = ON
TRACE_LEVEL_CLIENT = OFF
NAMES.DEFAULT_DOMAIN = WORLD
SQLNET.EXPIRE_TIME = 10
SQLNET.AUTHENTICATION_SERVICES = (NTS)
DEFAULT_SDU_SIZE=32768
################
# Filename......: tnsnames.ora
# Created.......: created by SAP AG, R/3 Rel. >= 6.10
# @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
################
<SID>.WORLD=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(COMMUNITY = SAP.WORLD)
(PROTOCOL = TCP)
(HOST = <hostname goes here>)
(PORT = 1527)   # or 1521 - check each system
)
)
(CONNECT_DATA =
(SID = <SID>)
(GLOBAL_NAME = <SID>.WORLD)
)
)
e. Edit the hosts file on the UNIX server and enter the Windows R3LOAD server ip address and hostname.
On the Windows server edit the hosts file and enter the UNIX server ip address and hostname. Test with
PING
f. Test the Oracle connectivity with TNSPING <SID>.WORLD.
g. Run the script attached to SAP Notes 50088 and 361641 (userdomain will usually be the local hostname
of the R3LOAD server if the server is not a domain member). This script will create the OPS$ users that
are needed for SAP to log in to Oracle: sqlplus /NOLOG @oradbusr.sql SCHEMAOWNER UNIX
SAP_SID x (The reason for using the UNIX script is that Oracle on UNIX cannot "see" the hostname of
the Windows server.)
h. Try logging into the Oracle database from the Windows server with the following syntax (for schema
systems replace SAPR3 with <SID>SAP): sqlplus SAPR3/sap@<SID>.WORLD
i. To ensure correct authorizations try running SELECT * FROM T000;
j. Try running R3LOAD -testconnect (remember to set the environment first)
For DB2 databases it is recommended to set these environment variables and then run the DB2 client installer
DB2CLIINIPATH=C:\export\client
DB2DBDFT=<SID>
DB2INSTANCE=db2<sid>
DBMS_TYPE=db6
DBS_DB6_SCHEMA=sap<sid>
DBS_DB6_USER=sap<sid>
DSCDB6HOME=<db server name>
EXPORT_DIR=C:\export
JAVA_HOME=C:\export\sapjvm_8
rsdb_ssfs_connect=0
SAPSYSTEMNAME=<SID>
It is further recommended to configure Jumbo Frames on both the R3LOAD server and the Database Server,
during both the export and the import. Note that the Jumbo Frame size must be configured identically on the
Database Server, the switch ports used by both the DB and R3LOAD server, and the NIC on the R3LOAD server. The
normal value for Jumbo Frames is 9000 or 9014, though some network devices may only allow 9004. It is
essential that this value is the same (or higher) on all devices or network errors will occur.
If high kernel times are seen on specific Logical Processors in Task Manager check RSS options on the NIC
cards. Windows 2008 and higher allows for RSS Ring configuration usually up to 8 CPUs on 1Gbps NIC and up
to 16 on 10Gbps cards. Perfmon can be used to monitor "Queued DPC" per CPU. This will indicate how many
CPUs are being used for network DPC traffic and how many RSS rings are configurable. RSS ring
configuration can be changed under the Advanced network properties for most NIC drivers. RSS does not function
well in combination with 3rd party network teaming software. It is recommended to use Windows Server 2016
which has built in network teaming. https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-
the/Network-Settings-Network-Teaming-Receive-Side-Scaling-RSS/ba-p/367195
On Azure it is not possible to configure Jumbo Frames. Instead, the Azure Accelerated Networking feature and
Proximity Placement Groups should be used if possible.
In some cases the network traffic generated from an import will be so great network errors may cause R3LOAD to
fail. If this occurs please review Microsoft KB899599.
13. Disabling or Deleting Secondary Indexes
Secondary indexes can be disabled and certain long-running indexes built online after the system is
restarted and validated. To do this, remove the index definition from the STR structure file. After the system is
restarted, 10-20 indexes can be built online simultaneously. It is recommended to start the ONLINE index build
phase prior to users logging onto the system. If using SQL Server 2016 or later, start the index build with the low-priority lock option.
14. Hyperthreading
It is recommended to use Hyperthreading on all Intel processors. In very rare cases Hyperthreading can be
disabled in the server BIOS. Review Note 1612283 - Hardware Configuration Standards and Guidance
In such an export procedure R3LOAD will communicate directly with the R3LOAD process on the target server.
No dump files will be created as all data is passed via TCPIP. A socket export/import reduces the R3LOAD CPU
consumption and may allow slow legacy servers to run a larger total number of R3LOAD processes.
It is not possible to use TCPIP Port based migration procedure when converting from non-Unicode to Unicode.
It is possible to migrate a Unicode SAP system running on an Oracle database to a Unicode SAP system running
on SQL Server (even if the source system is running on a big-endian 4102 platform and SQL Server is on a
little-endian 4103 platform).
Note: in a socket export, the OrderBy parameter on the import server must not be set or the import will crash with a
Java error (import order is set by the export server).
a. Ensure the full SAP support stack is reasonably up to date (capable of supporting SQL2016 or SQL2017)
b. Implement OSS Note 2681245 - Correction Collection for SAP BW on SQL Server – this code is safe to
apply to Oracle or DB2 systems. The code will never be executed on DBMS other than SQL Server.
c. Apply any OSS Notes for SMIGR_CREATE_DDL listed in Note 888210
d. Run SMIGR_CREATE_DDL with option “SQL Server 2016 (all column-store)” option selected
e. Export the database
f. Import the database
g. Run RS_BW_POSTMIGRATION with the default selection for a Heterogeneous migration
The default outcome is to automatically convert all F Fact and E Fact cubes to Column Store. If any cubes are not
converted to column store, open a support message in queue BW-SYS-DB-MSS.
On other SAP components it may be possible to update only the SAP_BASIS support pack to allow the use of
the most recent SQL Server version. On BW systems this is not possible and the entire Support Pack Stack must
be upgraded to support a specific version of SQL Server.
It is recommended to review:
Recent SAP BW improvements for SQL Server
Improved SAP compression tool MSSCOMPRESS
Improvements of SAP (BW) System Copy
Modern versions of SQL Server support up to 15,000 table partitions. It is still recommended to check for objects
with many partitions on the source and target systems. Migrations to SQL Server will be re-partitioned even if the
source system is not partitioned. See Note 1471910 - SQL Server Partitioning in System Copies and DB Migrations.
The number of partitions on SAP BW systems might differ between the source and target systems depending on
several factors. More information on partitioning on BW systems can be found here:
https://blogs.msdn.microsoft.com/saponsqlserver/2013/03/19/optimizing-bw-query-performance/
In general it is recommended to keep the number of partitions below around 500. A typical approach is to do "BW
Compression" on F Fact tables after the data has been validated for 2-6 weeks.
To check the partition count before and after migrating a SAP BW system there are several options:
1. Use report MSSCOMPRESS on the target system and copy the results into Excel and sort
2. Run the statement below
select COUNT(partition_id),object_name(object_id),index_id
from sys.partitions
where OBJECTPROPERTY(object_id,'IsUserTable')=1
group by object_id, index_id
order by 2,3 asc
To check the compression properties of a particular table run the following in SQL Management Studio
select OBJECT_NAME(object_id), index_id, data_compression, data_compression_desc
from sys.partitions where object_id = OBJECT_ID('<TABLENAME>');
20. Oracle or DB2 ABAP Hints or EXEC SQL – How to handle these
In general we have found that the SQL Server Optimizer does not require as many hints as Oracle. Therefore it is
our standard recommendation to ignore Oracle or DB2 hints on SQL Server. Only if a specific performance
problem is identified should a SQL Server ABAP hint be added. This applies to both SAP standard and custom Z
ABAP. We strongly recommend against manually converting all Oracle ABAP hints into their SQL Server form.
This is time consuming and unnecessary. SAP provide a small report to scan ABAP to detect hints and EXEC
SQL - Report RS_ABAP_SOURCE_SCAN
Review https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/How-to-integrate-SQL-Server-
specific-hints-in-ABAP/ba-p/367138
One simple way to do this is to run all the preparation steps such as table splitting on the UNIX server and then
copy the export directory with the STR, WHR and other required files to a Windows Intel Server. Then manually
run migmon. SWPM/SAPInst will give an option during the system copy to “Manually start Migmon”
However if there is no choice other than to run r3load on the UNIX server then follow the procedure below:
1. Download the latest SL Toolset https://service.sap.com/sltoolset (SWPM)
2. Logon to the Database server (not supported on application servers) and run ./sapinst -nogui as root
3. On a Windows server run sapinstgui.exe and connect to the UNIX server on SWPM port
4. Export system using the SAPinst GUI
5. FTP dump files to Windows server and import
Review Note 1680045 – some old operating systems are no longer supported
This link may be useful for vi usage and for setting UNIX environment variables such as JAVA_HOME.
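As an illustration, the required variables can be placed in a small profile snippet on the UNIX server and sourced before starting ./sapinst (the paths and SID below are assumptions, not SAP defaults):

```shell
# Sketch: a profile snippet with the variables sapinst expects on the UNIX
# export server. /opt/sapjvm_8 and PRD are illustrative placeholders.
cat > sap_env.sh <<'EOF'
JAVA_HOME=/opt/sapjvm_8
SAPSYSTEMNAME=PRD
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME SAPSYSTEMNAME PATH
EOF
# Source it into the current shell before running ./sapinst -nogui
. ./sap_env.sh
echo "JAVA_HOME=$JAVA_HOME"
```

Sourcing the snippet (rather than executing it) is what makes the variables visible to the subsequently started sapinst process.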
23. SAP 4.7, ECC 5.0 on Windows 2008 R2 or Windows Server 2012 (R2)
SAP only supports Basis 7.0 or higher components on Windows 2008 R2; however, it is possible to migrate from
UNIX/Oracle to Windows 2008 R2 and SQL Server on older releases provided an upgrade is performed
immediately.
a. ERROR: ExeFastLoad: rc = 2
Please review SAP Note 942540. It is probable that the DFACT.SQL file has not been generated by the
SMIGR_CREATE_DDL report, or that the file is not in the <export dir>\DB\MSS directory. If the problem continues,
try setting NO_BCP=1 to stop FASTLOAD. This will allow R3LOAD to output a more specific error
message. Also check the SQL Server Error Log.
c. Dump on Logon Screen makes it impossible to logon: DYNPRO_ITAB_ERROR See Note 1287210
f. 4.6C error in dev_w* - "Long Datatype Conversion not performed" - please see Note 126973 - SICK messages
with MS SQL Server
g. R3SETUP and possibly very old SAPInst may attempt to create a SAP database with code page 850BIN prior
to the import of the dump files. Note 799058 and 600027 strictly forbid the use of code page 850BIN and
require conversion to 850BIN2.
Also note that the utility for converting codepage 850BIN to 850BIN2 does not work on SQL 2005 or higher
(the fast conversion feature was dropped from SQL 2005). Therefore care should be taken to avoid the case
where R3SETUP creates a 850BIN database on SQL 2005 and then MIGMON is used to import the system
into this database. Clearly this will result in an unsupported system running code page 850BIN on SQL 2005.
Conversion will be impossible and the import will need to be repeated after dropping and then manually
creating the database.
The following commands display the server (default) and database collations:
SELECT SERVERPROPERTY('Collation')
SQL_Latin1_General_CP850_BIN2
SELECT DATABASEPROPERTYEX('<SID>', 'Collation')
SQL_Latin1_General_CP850_BIN2
An incorrect code page will sometimes produce import errors with "ERROR: DbSlEndModify failed rc = 26"
This is because the limit on the number of parameters on a stored procedure is 2100 on SQL. It is higher on
other databases
http://technet.microsoft.com/en-us/library/ms191132.aspx
It is possible to change queries with > 2090 parameters to “literal” queries. Review SAP Note 1552952
i. In very rare cases a JOIN on Oracle may not work on SQL Server. This can happen on systems such as
CRM where GUIDs are stored in RAW datatypes and a JOIN is attempted on a CHAR datatype. Please
review Note 1294101
j. A simple and easy way to suspend and release all batch jobs on a system is to run these reports in SE38
Suspend: BTCTRNS1
Release: BTCTRNS2
update sapr3.tbtco set status = 'P' where jobname not like 'EU%' and jobname not
like 'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'
and status = 'S'
delete from sapr3.tbtcs where jobname not like 'EU%' and jobname not like
'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'
SQL statement that includes the Jobs for EarlyWatch-Alert (if system is just being moved):
update sapr3.tbtco set status = 'P' where jobname not like 'EU%' and jobname not
like 'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'
and jobname not like 'SCUI%' and jobname not like 'AUTO_SESSION_MANAGER' and
status = 'S'
delete from sapr3.tbtcs where jobname not like 'EU%' and jobname not like
'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%' and
jobname not like 'SCUI%' and jobname not like 'AUTO_SESSION_MANAGER'
k. These commands will purge old UNIX host profile parameters. Import new profiles with RZ10.
Do not migrate UNIX style profile parameters to Win/SQL. Use zero memory management and keep the
default parameters in general.
e. UNIX and Windows CR 0x0D – carriage return formatting is different. SAP Note 27 (not a mistake, note 27)
contains the profile parameter abap/NTfmode. Also see 788907
f. A file open for writing on UNIX can still be copied, but on Windows it is locked. If the ABAP command OPEN DATASET is used to
open a file on a UNIX OS, it is still possible to copy the file; on Windows a lock on the file will be held. It is
required (and best practice) to ensure a CLOSE DATASET ABAP command is issued before manipulating a
file external to the ABAP server.
g. A large number of R3LOAD processes are configured and Oracle issues this error
Solution:
Increase the parameters in
unix: $ORACLE_HOME/dbs/init<DBSID>.ora
windows: $ORACLE_HOME/database/init<DBSID>.ora
PROCESSES=1000
SESSIONS=1105
h. Sorting some BW or other large tables can consume massive amounts of PSAPTEMP. If this occurs there
are two options: (1) switch to an unsorted export (see the earlier section in this document) or (2) run the commands
below to increase PSAPTEMP.
ORA-01652: unable to extend temp segment by 128 in tablespace PSAPTEMP
(DB) INFO: disconnected from DB
sqlplus /nolog
SQL> connect / as sysdba
SQL> ALTER TABLESPACE PSAPTEMP ADD TEMPFILE 'E:\oracle\BWP\sapdata1\temp_1\TEMP.DATA2'
SIZE 20000M;
SQL> SELECT * FROM V$TEMP_SPACE_HEADER;
i. FASTLOAD Errors
The system environment variable NO_BCP=1 will override the -loadprocedure fast option and force
R3LOAD to use the normal DBSL interface for the import.
k. To Enable fastload on LOB columns in 6.40 & 7.00 set BCP_LOB=1 and review note 1156361
n. If the following error is seen, read OSS Note 1721059 - Atomic Bind on SQL 2012:
(DB) ERROR: DDL statement failed
(INSERT INTO @XSQL VALUES (' sap_atomic_defaultbind 0, '/BI0/E0BWTC_C02',
'KEY_0BWTC_C02P' ') )
DbSlExecute: rc = 103
(SQL error 2812)
o. For logon, license or other entries in transaction SECSTORE, implement Note 1532825.
p. MaxDB Migrations using a Windows R3LOAD server require that the appropriate security is in place to allow
connection to MaxDB. See SAP Note 39439 - XUSER entries for SAP DB and MaxDB Syntax should
look similar to this: xuser -U w -u <SID>ADM,<password> -d <SID> -n <maxdbhost> -S SAPR3 set
q. Below is a useful script to run if an Import fails and the entire SAP database needs to be purged of all tables.
Thanks to Amit for providing this. WARNING: Running this script will drop all tables in the current database
Use <SID>;
EXEC sp_MSforeachtable 'drop table ?';
r. Towards the end of an import there may be many “suspended” SQL processes. These can be viewed with
SQL Management Studio Activity Monitor. Clicking on the suspended process may show that a process is
performing a CREATE INDEX. Towards the end of an import most of the table data import is complete and
SQL Server will be building secondary indexes. The primary clustered index has been built simultaneously as
the table data is loaded. Often these secondary indexes are non-standard Z indexes or sometimes unused
SAP standard indexes. These indexes may be deleted in the source system before export or created after the
system has been restarted and the downtime period is over. SQL 2005 and higher supports online index
creation.
The memory consumption during index creation can be substantial, especially if many indexes are being built
simultaneously. This script is useful to detect situations when SQL is suspending index creation due to
insufficient memory
-- current memory grants per query/session
select
session_id, request_time, grant_time ,
requested_memory_kb / ( 1024.0 * 1024 ) as requested_memory_gb ,
granted_memory_kb / ( 1024.0 * 1024 ) as granted_memory_gb ,
used_memory_kb / ( 1024.0 * 1024 ) as used_memory_gb ,
st.text
from
sys.dm_exec_query_memory_grants g cross apply
sys.dm_exec_sql_text(sql_handle) as st
-- uncomment the where conditions as needed
-- where grant_time is not null -- these sessions are using memory allocations
-- where grant_time is null -- these sessions are waiting for memory allocations
Many R3LOAD BCP or CREATE INDEX processes may be shown in status SUSPENDED with the
RESOURCE_SEMAPHORE wait type in the DMV above.
If this is the case, it may be useful to cap the amount of memory that a particular secondary index build task
can consume. This will force the secondary index build to use TEMPDB. The way to cap memory is to
activate Resource Governor (by right-clicking on it in SSMS) and adjust the memory percentage value as needed.
By default SQL Server can easily consume 10-40GB of RAM per index build if no limit is set – the actual value
depends on the amount of RAM in the server. A large memory grant substantially improves index build speed; however, if too
many secondary indexes are built at one time this will consume all available memory, thereby blocking other
resources. It is recommended to monitor TEMPDB utilization when setting this option.
USE master;
BEGIN TRAN;
-- Create 1 workload group for SAP R3Load
-- Workload group is getting assigned to default pool automatically
CREATE WORKLOAD GROUP R3load;
GO
COMMIT TRAN;
go
-- Create a classification function.
CREATE FUNCTION dbo.classify_r3load() RETURNS sysname
WITH SCHEMABINDING AS
BEGIN
DECLARE @grp_name sysname
IF (APP_NAME() LIKE 'R3 00%')
SET @grp_name = 'R3load'
RETURN @grp_name
END;
GO
-- Register the classifier function with Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.classify_r3load);
GO
--change maximum memory grant a query can get. Default = 25%
ALTER WORKLOAD GROUP R3load with (REQUEST_MAX_MEMORY_GRANT_PERCENT=5);
go
-- Start Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
USE master;
BEGIN TRAN;
-- Create 1 workload group for SAP SQLCMD
-- Workload group is getting assigned to default pool automatically
CREATE WORKLOAD GROUP SQLCMD;
GO
COMMIT TRAN;
go
-- Create a classification function.
-- Note: only one classifier function can be active; registering this one
-- replaces the R3load classifier above, so combine the logic if both are needed.
CREATE FUNCTION dbo.classify_sqlcmd() RETURNS sysname
WITH SCHEMABINDING AS
BEGIN
DECLARE @grp_name sysname
IF (APP_NAME() LIKE 'SQLCMD%')
SET @grp_name = 'SQLCMD'
RETURN @grp_name
END;
GO
-- Register the classifier function with Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.classify_sqlcmd);
GO
GO
--change maximum memory grant a query can get. Default = 25%
ALTER WORKLOAD GROUP SQLCMD with (REQUEST_MAX_MEMORY_GRANT_PERCENT=5);
go
-- Start Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
t. To transfer all objects from the dbo schema (or any other schema) into the <sid> schema, run the
script usr_change.sql attached to OSS Note 1294762, or copy the output of this script into a new
query window and execute it
Also review 683447 - SAP Tools for MS SQL Server
u. If the message (IMP) ERROR: (MSS) Must declare the table variable "@XSQL" is seen, review note
2538423 - MSS: R3load cannot deal with @XSQL
v. If high WRITELOG and/or LOGBUFFER wait times are seen, review the blog on FusionIO and other SSD
devices at https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/Accelerating-Oracle-gt-SQL-
Server-Migrations-with-Fusion-io/ba-p/367119. FusionIO devices are highly recommended for large
migrations to speed up writes to the transaction log and/or tempdb. FusionIO and SSD devices are
fully supported for use with SQL Server. Always run Windows Server 2019 or another version of
Windows that supports the TRIM command
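One simple way to confirm whether log writes are a bottleneck is to look at the cumulative wait statistics. A minimal sketch (using the standard DMV sys.dm_os_wait_stats, not taken from this document):

```sql
-- Average wait per WRITELOG / LOGBUFFER occurrence; consistently high averages
-- point to a slow transaction log device
SELECT wait_type, waiting_tasks_count, wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'LOGBUFFER');
```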
Type B
Connects to an SAP system using load balancing.
The application server will be determined at runtime.
The following parameters can be used:
DEST = <destination in RfcOpen>
TYPE = <B: use Load Balancing feature>
R3NAME = <name of SAP system, optional; default: destination>
MSHOST = <host name of the message server>
GROUP = <group name of the application servers, optional; default: PUBLIC>
RFC_TRACE = <0/1: OFF/ON, optional; default:0(OFF)>
ABAP_DEBUG = <0/1: OFF/ON, optional; default:0(OFF)>
USE_SAPGUI = <0/1: OFF/ON, optional; default:0(OFF)>
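The parameters above map directly onto a Type B destination entry in saprfc.ini. The SID, host, and group names below are placeholders for illustration, not values from this document:

```ini
DEST=PRD_LB
TYPE=B
R3NAME=PRD
MSHOST=msprdhost
GROUP=PUBLIC
RFC_TRACE=0
```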
In addition to the documentation provided by SAP, the following may also have to be set:
dest.SAPSystemName = "<SID>";
The service name of the message server must be defined in the 'services' file (<service name> = sapms<SAP
system name>).
a) The hostname of the computer (or the name that is configured with the profile parameter
SAPLOCALHOST) must be resolvable into an IP address.
b) This IP address must resolve back into the same hostname. If the IP address resolves into
more than one name, the hostname must be first in the list.
c) This resolution must be identical on all R/3 server machines that belong to the same R/3 system.
Note 364552 - Loadbalancing does not find application server
Note 1011190 - MSCS:Splitting the Central Instance After Upgrade to 7.0/7.1
27. Migration of 4.6C or lower based systems: high-level process:
a. Raise an OSS message requesting a copy of the 4.6D SAP R3SETUP. (R3SETUP is no longer
available for download)
b. Prepare system according to 4.6D system copy guide
c. Install R3SETUP on the source system and update the DBMSSLIB.DLL, R3LOAD.EXE &
R3SZCHK.EXE
d. Modify R3SETUP DBEXPORT.R3S to force R3SETUP to exit just before starting the export
<xx>=R3SZCHK_IND_IND
<xx>=DBEXPCOPYEXTFILES_NT_IND
<xx>=DBR3LOADEXECDUMMY_IND_IND ***delete***
<xx>=CUSTOMER_EXIT_FOR_EXPORT ***add***
<xx>=DBEXPR3LOADEXEC_NT_IND ***delete***
<xx>=DBGETDATABASESIZE_IND_IND
[CUSTOMER_EXIT_FOR_EXPORT] ***add***
CLASS=CExitStep ***add***
EXIT=YES ***add***
e. Run R3SETUP and open DBEXPORT.R3S. Do not select the Perl based package splitter. Exit
at the customer stop point
f. Copy the Java based splitter to the R3SETUP install directory. Copy *.EXT and *.STR files from
<export dir>\DATA to the installation directory. Configure and run the Java based package
splitter tool. The package splitter will process the EXT and STR files and rename them to *.OLD
and create new EXT and STR files.
g. Copy Migration Monitor to the installation directory and run Migration Monitor to export the
system
h. Run R3SETUP and open DBEXPORT.R3S to continue the export steps. These steps will generate
the DBSIZE.TPL
i. Run Migration Time Analyzer to check which packages run the longest. Try to optimize the
export by starting these packages first using the OrderBy.txt file
j. Start a CMD.EXE session from the \Windows\syswow64 directory and run SETUP.BAT to install
R3SETUP on target server. Immediately update the DBMSSLIB.DLL and R3LOAD.EXE
k. Modify DBMIG.R3S with exit point
190=DBDBSLTESTCONNECT_NT_IND
200=MIGRATIONKEY_IND_IND
<xx>=CUSTOMER_EXIT_FOR_IMPORT ***add***
210=DBR3LOADEXECDUMMY_IND_IND ***delete***
220=DBR3LOADEXEC_NT_MSS ***delete***
230=DBR3LOADVIEWDUMMY_IND_IND ***delete***
240=DBR3LOADVIEW_NT_IND ***delete***
250=DBPOSTLOAD_NT_MSS
260=DBCONFIGURATION_NT_MSS
[CUSTOMER_EXIT_FOR_IMPORT] ***add***
CLASS=CExitStep ***add***
EXIT=YES ***add***
l. Run R3SETUP and open DBMIG.R3S. Exit at the customer stop point
m. Copy the <export dir> to the target system and run Migration Monitor to import the system
n. Run R3SETUP to continue the installation. If R3SETUP fails review note 965145
o. Run Migration Time Analyzer and review OrderBy.txt
p. Perform the post system copy steps as per the 4.6D system copy guide
The Azure platform also supports Azure Disk Encryption (ADE). This technology is similar to Windows
BitLocker and can be used to encrypt the VHDs used by a VM.
Note: it is not necessary or beneficial to use Azure Disk Encryption and SQL Server TDE at the same
time. We recommend against storing SQL Server data and log files that have been encrypted with TDE
on disks that have been encrypted with ADE; using both SQL Server TDE and ADE may cause
performance problems
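To see which databases already have TDE enabled before deciding on a disk-encryption layout, the standard catalog views can be queried. A minimal sketch (not from this document):

```sql
-- Databases with TDE enabled; encryption_state 3 = encrypted
SELECT d.name, d.is_encrypted, k.encryption_state
FROM sys.databases AS d
LEFT JOIN sys.dm_database_encryption_keys AS k
       ON d.database_id = k.database_id;
```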
30. Removing SAP Business Warehouse Accelerator and Replacing with SQL Server Column Store
SQL Server Column Store, Flat cube and new technologies in SAP BW 7.50 SPS 04 greatly improve
performance and have already allowed many customers to terminate the use of SAP BWA.
Review these SAP Notes and check the SAP on SQL Server blog site for recent announcements about
SQL Server Column Store
BIA index deletion task details: report RSDDTREX_ALL_INDEX_REBUILD
Review the blog Very Large Database Migration to Azure – Recommendations & Guidance to Partners.
The checklist covers almost all required steps; however, the following should also be reviewed:
1. Run the migration on large powerful VMs, then downsize the VM to the size required for normal
operations
2. Accelerated Networking is essential for good performance during an R3load import
3. Premium Disk or UltraSSD is mandatory
4. Test ExpressRoute throughput well in advance of the migration weekend
5. Increase cluster and TCP timeout parameters as documented in the SAP on Azure Checklist
Master Note for SAP on Azure: 1928533 - SAP Applications on Azure: Supported Products and Azure
VM types
Master Documentation Link https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-
started **Always Start Here**
Deployment Checklist https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-
deployment-checklist
Azure Datacenter locations https://azure.microsoft.com/en-us/global-infrastructure/regions/
Azure SAP Blog http://aka.ms/saponazureblog
To upload huge amounts of data to Azure it is recommended to use the Azure Import/Export Service