JMIGMON State Properties: Restart: Lesson: Controlling The JLOAD Processes
The Java log shows the start of JMIGMON. Startup errors can be analyzed from here. The
console log shows which JLOAD processes are completed and which have failed.
LESSON SUMMARY
You should now be able to:
● Control the JAVA migration process
LESSON OVERVIEW
In this lesson, you will learn how to perform package and table splitting for JLOAD.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform package and table splitting for JLOAD
Workflow Splitting
Table splitting is optional, but is appropriate when there are large tables that significantly
influence the export time. JPKGCTL can locate a split column automatically, but then it only
checks the fields of the primary key. To use a different field, specify it explicitly in a split rule
file. If the requested number of splits cannot be achieved, the number of splits is
automatically reduced. If this process does not result in useful WHERE conditions,
JPKGCTL gives up and no table splitting takes place.
The WHERE condition is used to select data within a specified range. For each job file of a split
table, a separate JLOAD is started.
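For example, a table split into two parts might use WHERE conditions such as CID <= 1000000
and CID > 1000000 (the column name is taken from the call below; the actual conditions and
boundary values are generated by JPKGCTL). The following example shows a jsplitter call with
a package split size of 10M and table split rules for three tables: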
jsplitter.bat -sec=MJ1,jdbc/pool/MJ1
\\wdflbmt7051/sapmnt/MJ1/SYS/global/security/data/SecStore.properties
\\wdflbmt7051/sapmnt/MJ1/SYS/global/security/data/SecStore.key
-split=10M -tablesplit BC_COMPVERS:2
-tablesplit J2EE_CONFIG:4:CID;PATHHASH
-tablesplit J2EE_CONFIGENTRY:4:CID
The goal of the tool is to provide job file packages for import and export. This can be done in
two ways: package splitting and table splitting. For package splitting, the split option is used to
generate packages with the desired package split size. For table splitting, the splitrulesfile or
tablesplit option must be added. The splitrulesfile (splitting configuration file) is used for table
splitting by providing an input file that contains the number of splits and the key for splitting.
It is also possible to use the tablesplit command line option to apply the split rules directly. If
neither a splitrules file exists nor table split rules are provided on the command line, table
splitting is omitted and the tool only performs package splitting.
The syntax of the provided splitrules file can be checked with the checksplitrules option. In this
case, all other options are ignored and only the syntax of the rule file is verified. For more
detailed parameter descriptions, see the JSPLITTER user guide.
LESSON SUMMARY
You should now be able to:
● Perform package and table splitting for JLOAD
Learning Assessment
X A Verifies the consistency of the target database tables with *.STR file definitions
X B Compares the number of table records in *.TOC with the target database records
X C Takes the dump file checksum and compares it with the table checksum in the
target database
X D Checks that all tables in the *.TSK files were created in the target DB
2. Which of the tools contained in the MICHECK.SAR archive need to connect to the
database?
Choose the correct answer.
X A Object Checker
X B Table Checker
X C Package Checker
3. Which check tools are started by SAPINST after MIGMON finishes the import?
Choose the correct answers.
X A Object Checker
X B Table Checker
X C Package Checker
X C Provides separate lists for the data load and index creation
5. The Migration Monitor is used to allow the configuration of a parallel export/import. Which
files are used to tell the import Migration Monitor that the export of specific package is
complete?
Choose the correct answer.
X A Log files
X C Signal files
6. If the MIGMON export or import package order is not explicitly specified, what is the
default order?
Choose the correct answer.
X A Alphabetical order
X B Undetermined order
11. Why does JMIGMON require logon information for the database?
Choose the correct answer.
12. The number of packages which are generated by JPKGCTL is related to which command
line parameter?
Choose the correct answer.
X A -package_files
X B -size_limit
X C -split
X D -split_rules
X True
X False
Lesson 1
Performing an ABAP System Migration
Lesson 2
Performing a JAVA System Migration
UNIT OBJECTIVES
LESSON OVERVIEW
In this lesson, you will learn how to perform an ABAP system migration.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform an ABAP system migration
Many migration steps can be performed in parallel in the source and target systems.
After step 3, Generate Templates for DB Size, is completed in the source system, be prepared
to start step 8, Create database, in the target system. Once step 6, File transfer via FTP, tape,
USB disk, laptop, is complete, steps 7-8 should already have been completed in the target
system.
In a parallel export/import scenario, steps 4, 5, 6, 9, and 10 run at the same time.
Before migration, download SAP Notes for homogeneous and heterogeneous system copy
and installation.
Before the technical migration at a system level, run the database statistics or other
performance-relevant activities.
To reduce the time required to unload and load the database, minimize the amount of data in
the migration source system.
Before the system copy, suspend all jobs in the source system. This prevents jobs from
running directly after the first start of the target system. Use report BTCTRNS1 (sets jobs to
suspend mode) in the source system and report BTCTRNS2 (reverses the suspension) in the
target system.
If the target system has a new SAP SID, release all of the corrections and repairs before
starting the export.
If the database contains tables that are not in the ABAP dictionary, check whether some of
these tables also need migrating.
All SAP systems that use non-standard database objects (BI/BW, SCM/APO) require the
execution of report SMIGR_CREATE_DDL. Other system types can have non-standard
database objects as well. On Oracle, for instance, the compression of tables and indexes will
trigger SMIGR_CREATE_DDL to create *.SQL files containing compression statements, if the
target database is Oracle as well. We recommend running SMIGR_CREATE_DDL test-wise to
check whether it creates *.SQL files or not. After you have called this report, do not change the
non-standard objects again before starting the export. If no database-specific objects exist,
then no <TABART>.SQL files are generated. It is important that the report completes
successfully.
You can use any file system location as the Installation Directory. SAPINST will ask for the
location. Follow the guidelines in the manual for homogeneous and heterogeneous system
copies.
Depending on the target database, additional options may be available to select in the
Database Version field.
Note:
For more information, see SAP Note: 888210 - NW 7.**: System copy
(supplementary note).
SAPINST calls R3LDCTL and R3SZCHK. The runtime of R3SZCHK depends on the version,
the size of the database, and the database type. The files *.EXT and DBSIZE.XML are
generated by R3SZCHK. The table and index sizes are written by R3SZCHK into the table
DDLOADD, from where they are distributed into the *.EXT files.
To improve the unload and load times, the generated *.STR and *.EXT files are split into
smaller units. Table splitting is particularly effective at reducing the export and import
runtime on large tables.
MIGMON calls R3LOAD to create task files. If WHERE files exist, the WHERE conditions are
inserted into the *.TSK files. Afterwards, MIGMON generates the command files.
MIGMON starts a number of R3LOAD processes. A separate R3LOAD process is executed for
each command file. The R3LOAD processes write the data dump to disk. As soon as an
R3LOAD process finishes (whether successfully or due to error), MIGMON starts a new
R3LOAD process for the next command file.
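The degree of parallelism is controlled through the Migration Monitor properties file. The
following is a minimal, hedged sketch of such a file; the property names are commonly used
export_monitor_cmd.properties entries, but verify the exact names and values against the
Migration Monitor guide for your release:
# export_monitor_cmd.properties (illustrative excerpt)
exportDirs=/export_dump/ABAP
installDir=/migration/install
orderBy=size
jobNum=8
The jobNum value determines how many R3LOAD processes run in parallel, and orderBy
controls the package processing order.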
● FTP
● Tape
● USB storage device
● Network storage device
Caution:
Use a safe copy method.
In cases where dump files must be copied to transportable media, make sure that the files are
copied correctly. Appropriate checksum tools are available for every operating system.
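A minimal sketch of such a verification using the standard GNU coreutils checksum tools;
the directory names are placeholders, and any other checksum tool available on your platform
can be used in the same way:
# on the source host, after the export has finished
cd /export_dump/ABAP/DATA
sha256sum *.001 *.TOC > export_checksums.txt
# copy the dump files and export_checksums.txt to the target host, then verify there:
cd /import_dump/ABAP/DATA
sha256sum -c export_checksums.txt
Every file must be reported as OK before the import is started.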
<install_dir> <PACKAGE>.CMD
<install_dir> <PACKAGE>.TSK
Note:
You must transfer all of the content of the export directory to the target system.
The file LABEL.ASC is generated during the export of the source database. SAPINST uses the
content of LABEL.ASC to determine whether the dump data is read from the correct
directory.
SMIGR_CREATE_DDL generates the SQLFiles.LST file together with the *.SQL files.
The *.CMD and *.TSK files are generated separately for export and import, so do not copy
them.
1. You log on to the SAP Service Marketplace using an S-User that is valid for the installation
number of the source system.
The customer must accept the migration key license agreement that is displayed, so request
the migration key from the customer.
Note:
If you have any problems, see SAP Note 338372.
During the migration key generation, you are asked whether you use the classical migration or
the DMO migration procedure.
The migration key is identical for all SAP Systems with the same installation number. The
migration key must match the R3LOAD version. If asked for the SAP Release, enter the
release version of the R3LOAD that you use. If you are in doubt, check the log files.
Some systems use several different host names (for example, in a cluster environment).
Generate the migration key from the node name that is listed in the (GSI) INFO section of the
R3LOAD export log (source system) and MIGKEY.log (target system).
SAPINST tests the migration key by calling R3LOAD -K (uppercase K). The file MIGKEY.log
contains the check results.
Note:
For more information, see SAP Note 338372: Migration key does not work.
● Use the current SAPINST from the downloaded Software Provisioning Manager media.
● Install the latest SAP kernel.
● Use the current SAPINST from the downloaded Software Provisioning Manager media
● Install the latest database patches.
Note:
For the latest SAP kernel and database patches, check the approved SAP product
platform and release combinations by examining the product availability matrix
(PAM).
Be generous in your database sizing during the first migration test run. The experience gained
through the test migration is better than any estimate that you calculate in advance, and you
can always adjust the values in subsequent tests.
MIGMON calls R3LOAD to create task files. If WHERE files exist, the WHERE conditions are
inserted into the *.TSK files. Afterwards MIGMON generates the command files.
After the import is completed, SAPINST starts the program DIPGNTAB to update the
imported NAMETAB tables. The SAP table field order and field lengths from the source
system will be updated with the field order and field lengths found on the target database.
Examine the created log file dipgntab[<SAPID>].log. The log summary at the end of the file
must not report any errors.
In many cases, changing a database system includes changing the backup mechanism. Make
sure that you are familiar with the changed or new backup and restore procedures.
After the migration, you can delete the SAP System statistics and backup information for the
source system from the target database. For a list of the tables, see the system copy guide.
To finalize the system copy, SAPINST executes various jobs in the SAP target system using
RFC (as user DDIC in client 000). Always make sure that SAPINST has executed all RFCs
without error and finished properly.
The jobs that were suspended in the source system with report BTCTRNS1 can be released on
the target system by running report BTCTRNS2.
The non-standard database objects (mainly BW objects) that were identified on the source
system and re-created and imported into the target system need some adjustments. The
report RS_BW_POST_MIGRATION performs the necessary changes. With a database
migration, RS_BW_POST_MIGRATION connects to each SAP data source system to
regenerate the transfer rules. Make sure that the connected systems are in a modifiable state,
otherwise the transfer rules cannot be generated.
Note:
For more information, check the "Follow-Up Activities" chapter in the respective
system copy guide.
● Transaction SGEN
Note:
The table containing the generated ABAP loads is not exported/imported with the
R3LOAD system copy method.
The table in the SAP0000.STR file contains the generated ABAPs (ABAP loads) of the SAP
System. These loads are no longer valid. For this reason, MIGMON does not export the table.
Each ABAP load is generated automatically the next time a program is called. The system is
slow until all commonly used programs are generated. Use transaction SGEN to regenerate
the ABAP loads.
General Tests
Hint:
Involve end users in the test process.
Take care when setting up the test environment. To prevent unwanted data communication
to external systems, isolate the system. External systems do not distinguish between
migration tests and production access. An existing checklist from a previous upgrade or
migration can be a valuable source of ideas when you develop a cut over plan. To identify any
differences between the original and the migrated system, involve end users as soon as
possible.
LESSON SUMMARY
You should now be able to:
● Perform an ABAP system migration
LESSON OVERVIEW
In this lesson, you will learn how to perform a JAVA system migration.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform a Java system migration
● Support package stacks and installed software components must be supported by the
migration tools
● SAPINST starts SAPCAR to archive application-specific data stored in the file system.
● As a prerequisite, SAPINST must recognize the application.
● Various *.SAR files can be created.
Note:
This is not required in SAP releases based on 7.10 and later.
If SAPINST does not recognize the application and its related files, archives are not created.
Applications that are not recognized by SAPINST may require operating system-specific
commands to copy the respective directories and files to the target system. For instructions
on how to copy these applications, see the corresponding SAP Notes.
● SAPINST calls the software deployment manager (SDM) to collect deployed file system
software components.
● Read the SDM repository (list of deployed components).
● Put file system components into the SDMKIT.JAR file.
Note:
This is not required in SAP releases based on 7.10 and later.
In SAP releases below 7.10, the SDM repository is installed in the file system and is
redeployed into the target system from the SDMKIT.JAR file.
JPKGCTL packaged job files can contain multiple tables. They are named EXPORT_<n>.XML.
Job files for a single table are named EXPORT_<n>_<TABLE>.XML. If a table was split, the
resulting job files are named in the same way, but each with a different number. The export and
import job files are generated at the same time. They share the same job name but use the
file name prefix EXPORT or IMPORT. The sizes.xml file contains the expected export size of
each generated packaged job file and helps JMIGMON to export/import the packages by size.
The largest package is exported/imported first, then the next smaller packages, and so on.
Note:
In versions without JPKGCTL, JLOAD generates the EXPORT.XML and
IMPORT.XML by itself.
Location Content
<export_dir> SOURCE.PROPERTIES
<export_dir> LABEL.ASC
<export_dir> LABELIDX.ASC
<export_dir>/APPS LABEL.ASC
<export_dir>/APPS/... Application specific
<export_dir>/DB/<target_DBS> DBSIZE.XML
<export_dir>/JDMP EXPDUMP.<nnn> / EXPDMP_<PACKAGE>.<nnn>
<export_dir>/JDMP sizes.xml (if it exists)
<export_dir>/JDMP LABEL.ASC
<export_dir>/JDMP IMPORT[_<PACKAGE>].XML
<export_dir>/JDMP EXPORT[_<PACKAGE>].XML
<export_dir>/SDM SDMKIT.JAR
<export_dir>/SDM LABEL.ASC
<export_dir>/SEC SEC.SAR
<export_dir>/TOOLS SLTOOLS.SAR
<export_dir>/... Others
The LABELIDX.ASC file and the LABEL.ASC files are generated during the export of the
source database. SAPINST uses the content of these files to determine whether the load data
is read from the correct directory. The <export_dir>/APPS directory may be empty if, for
example, no applications are installed that keep their data in the file system. It is also possible
that the application is unknown to SAPINST.
The SEC.SAR file contains the SecStore.properties. The SLTOOLS.SAR file contains the Java
system copy tools such as JLOAD, JSPLITTER, JMIGMON, JMIGTIME, and so on. The tool user
guides are provided as PDF documents.
Additional directories and files created in the export directory are named “Others” in the table
File Transfer.
● Adapt DBSIZE.XML
● Install an SAP Java instance
● Install database software
● Apply the latest database patches
● Adapt DBSIZE.XML
● Install an SAP Java instance
Note:
The Java Add-In installation, including the necessary system copy activities, is done in a
single step together with the ABAP system installation and import.
● SAPINST calls the SDM to reinstall (deploy) its file system components including the SDM
Repository from the SDMKIT.JAR.
Note:
Since SAP release NetWeaver 7.10, the SDM is no longer used.
The SDM holds its repository in the file system. For a reinstallation, the content of SDMKIT.JAR
is used, which contains the necessary file system components as collected in the source
system.
Note:
In versions without JPKGCTL, JLOAD generates the EXPORT.XML and
IMPORT.XML by itself.
● SAPINST automatically extracts the collected application data SAPCAR archives into the
respective directory structures of the target system.
● Some SAPCAR archives may need to be copied manually to the respective directory
structures if mentioned in the system copy guide or SAP Notes.
● Manually copy the application-specific data that was not collected on the source system
because the application was not known to SAPINST.
The homogeneous and heterogeneous system copy guides and their respective SAP Notes
describe the general follow-up activities.
Java systems that include components that connect to an ABAP back end using the SAP
Java Connector (SAP JCo), for example SAP BW or SAP Enterprise Portal, need to have their
RFC destinations maintained. After the system copy, the public-key certificates are invalid on
the target system and need to be reconfigured. Component-specific follow-up activities exist
for SAP BW, Adobe Document Services, SAP Knowledge Warehouse, SAP ERP, and SAP CRM.
Note:
Involve end users in the test process.
Take care when setting up the test environment. To prevent unwanted data communication
to external systems, isolate the system. External systems do not distinguish between
migration tests and production access. To identify any differences between the original and
the migrated system, involve end users as soon as possible.
LESSON SUMMARY
You should now be able to:
● Perform a Java system migration
Learning Assessment
X A R3LDCTL
X B MIGMON
X C R3LOAD
X D SAPINST
Lesson 1
Troubleshooting Migration Problems
UNIT OBJECTIVES
LESSON OVERVIEW
In this lesson, you will learn how to troubleshoot common migration problems and identify
and correct errors.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Troubleshoot common migration problems
● Identify and correct errors
R3LOAD Terminations
If you encounter unexpected terminations, check that the migration tools are compatible with
the current SAP system and database version, check that the passwords are correct, and that
the necessary changes to the root user or the environment are made before starting
SAPINST.
● No connection to database
● Incorrect or missing environment variables
● Inconsistencies between the ABAP dictionary and the database dictionary
● Insufficient storage space in the export file system
● Insufficient temporary database space for sorting
Missing or incorrect variable settings in the C-shell environment might lead to DB connection
problems with R3LDCTL, R3SZCHK, or R3LOAD. This can be the case on SAP systems where
only a non-C-shell environment is used. When the system experiences such failures, test the
database connection by logging on as <SID>adm, running the C shell as the login shell, and
executing R3LOAD -testconnect. Adjust the environment settings as required.
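A minimal sketch of such a manual connection test; the SID mj1 is a placeholder, and
R3load -testconnect is the check described above:
su - mj1adm
csh                    # start a C shell so that the C-shell environment files are evaluated
R3load -testconnect    # must end with a successful database connection
echo $status           # 0 indicates success in the C shell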
Check transaction DB02 or DBACOCKPIT for missing objects on the database or in the ABAP
data dictionary. Determine how to deal with the missing objects before starting the export.
Remove QCM tables having no restart protocol in SE14 (invalid temp tables) before starting
the export. These tables can lead to export and import errors. Import errors are mostly of
type duplicate key.
Provide enough free disk space in the export file system. In general, assume that the dump
files will use 10% - 15% of the space occupied by the source database. For compressed
databases, double the free space. Export errors caused by insufficient free disk space can
cause dump file corruption.
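For example, under this rule of thumb a 1 TB uncompressed source database needs roughly
100 - 150 GB of free export space, and roughly 200 - 300 GB if the source database is
compressed.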
With a sorted export, make sure to have sufficient temp space available in your database (for
example, PSAPTEMP when exporting from Oracle). If there are free-space related errors,
increase the temp space, reduce the number of parallel running jobs, or consider an unsorted
export.
Note:
For more information, see SAP Note 9385: What to do with QCM tables.
Database connection problems can be caused by the same issues as described for export
terminations. Since the target system is normally installed from scratch, connection
problems are unlikely.
If there is not enough temporary database disk space for index creation, increase the
database storage units that are used for sorting (for example, PSAPTEMP for Oracle) or
reduce the number of parallel running R3LOAD processes.
Ensure that R3LOAD can access the directories and files of the import file system.
The R3LOAD warning level can have any value, as long as something is set. The R3LOAD
trace level is forwarded to the DBSL interface only. Useful values are between 1 and 4. The
higher the value, the more output is written. The trace can provide valuable information for
troubleshooting.
Note:
Only use these environment variables for problem analysis. The amount of log
data can quickly fill your install file system.
Note:
The warning and trace level can be set for programs like R3LDCTL, R3SZCHK,
and R3TA as well. Set the environment variables as follows: <PROGRAM
name>_WL=1 and <PROGRAM name>_TL=1 ... 4. Apart from R3LOAD, the warning
and trace level functionality is not available in all program versions.
The R3LOAD warning level is useful in cases where files cannot be found or opened without an
obvious reason. The list of environment variables contained in the R3LOAD log file can assist
you in the analysis of database connection problems caused by incorrect environment
settings. More information is written than usual.
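A hedged example of setting these variables before a manual R3LOAD run; the variable names
follow the <PROGRAM name>_WL / <PROGRAM name>_TL pattern described in the note above,
and the values are illustrative:
# Bourne/Korn shell syntax; for the C shell use setenv instead
export R3LOAD_WL=1           # switch on additional warning output
export R3LOAD_TL=3           # DBSL trace level, useful values are 1 to 4
R3load -testconnect          # subsequent R3LOAD calls write the additional log information
unset R3LOAD_WL R3LOAD_TL    # remove the variables again after the analysis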
Error text:
Database user SAPSR3 is not authorized to perform the INSERT.
R3LOAD response:
The R3LOAD process handling package SAPPOOL.STR cannot proceed due to the SQL error.
R3LOAD ends with a negative return code.
To fix the problem, grant access authorization for table ATAB to user SAPSR3.
1. R3LOAD reads the first task of status err or xeq from the SAPPOOL.TSK file. The error
occurred during the load, and not while creating the table, so the table contents must be
removed first. To do this, R3LOAD executes the truncate/delete SQL statement, which is
defined in the DDL<DBS>.TPL file.
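A hedged sketch of how the corresponding entries in SAPPOOL.TSK might look before the
restart; the table and key names are taken from the error example above, the line format
matches the task file excerpts shown later in this lesson, and the comments to the right are
explanations, not part of the file:
T ATAB C ok        # table was created successfully
D ATAB I err       # data load failed, the restart truncates/deletes the data and loads again
P ATAB~0 C xeq     # primary key creation is still pending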
● If the database returns a duplicate key error while creating a primary key or unique index,
R3LOAD stops on error.
Some tables have a primary key in the ABAP DDIC, but not on the source database. This may
be intentional, so ask the customer if there is a reason, and check for SAP Notes.
No *.SQL file was found by R3LOAD, or SMIGR_CREATE_DDL was not called on the source
system.
In the source database, silently corrupted primary keys can lead to duplicate exported data
records. In this case, the number of records in the *.TOC file is larger than the number of rows
in the source table. Verify or repair the primary key and export the table again.
SAP ABAP systems do not write data with trailing blanks into table fields (just to fill them up).
External programs that write directly into SAP system tables may insert trailing blanks to fill
a field completely. R3LOAD always exports table data without trailing blanks. If the same
data exists in the source database both with and without trailing blanks (because the SAP
system modified the data), duplicate key errors occur at import time. In this situation, clean
up the source table and export it again.
In an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the
open files, or what happened to the entire file. The example in the list describes a rare but
possible situation.
Exports into a network-mounted file system can lead to similar symptoms if the network
connection breaks. Abnormal terminations are caused by external events, not by a database
error or a file permission problem.
The figure Power Failure or OS Crash at Export Time: Small Tables shows a power failure or
operating system crash. In this example, the OS is unable to flush all of the file buffers to disk,
so a mismatch occurs between the dump file and the *.TOC or *.TSK file content. R3LOAD
exports TABLE08, and the *.TOC or *.TSK file is updated, but the data is not yet written by the
OS into the dump file. The *.TOC file contains block 48 as the last data block in the dump file.
After restarting the export process, R3LOAD looks at the *.TSK file to get the last exported
table, which is TABLE08. It then reads the *.TOC file for the last write position. R3LOAD opens
the dump file and seeks to the next write position at block 49 (which is behind the end of the
file in this case). The next table to export is TABLE09, which will be stored in the dump file
starting at block 49. The gap between block 42 (last physical write) and block 49 contains
random data.
It is unclear whether the *.TSK file contains more or fewer entries than the corresponding
dump file, so using the merge option might be risky. This problem is rare, but it can happen to
small tables that are completely exported and whose last few blocks still need to be flushed to
the dump file; the flush never completed because the system crashed, while the entries in the
*.TOC and *.TSK files were already written.
The figure Power Failure or OS Crash at Export Time: Large Tables shows a power failure or
operating system crash. In this example, the OS is unable to flush all of the file buffers to disk,
so a mismatch occurs between the dump file and the *.TOC or *.TSK file content. R3LOAD
finished the export of the large TABLE02, and the *.TOC or *.TSK file is updated, but the last 8
data blocks are not yet written by the OS into the dump file. The *.TOC file contains block
320.000 as the last valid data block in the dump file.
After restarting the export process, R3LOAD looks into the *.TSK file to find the last exported
table, which is TABLE02, then it reads the *.TOC file for the last write position. R3LOAD
opens the dump file and seeks to the next write position at block 320.001 (which is behind
the end of the file in this case). The next table to export is TABLE03, which will be stored in the
dump file starting at block 320.001. The gap between block 319.992 (last physical write) and
block 320.001 contains random data.
It is unclear whether the *.TSK file contains more or fewer entries than the corresponding
dump file, so using the merge option might be risky. This problem is rare, but it can happen to
tables that are exported completely where only the last blocks still have to be flushed to the
dump file, while the entries in the *.TOC and *.TSK files were already written.
Solution
Use the following solution when there is a power failure or OS crash when exporting large
tables:
● Repeat the export of all packages that were unloading when the system crashed.
● Remove the corresponding *.LOG, *.CMD, *.TSK, *.BCK, *.TOC, and dump files.
● Restart SAPINST and MIGMON.
Note:
In this situation, using the merge option is risky because it is unclear whether
the *.TSK file contains more or fewer entries than the corresponding dump file.
Only remove files for packages that were not completed at the time of the system crash. We
do not recommend using the merge option. The export of all involved packages should be
repeated.
● The problem:
- A space shortage in the file system causes the export to stop but after increasing the
space on the file system, the export restarts without problems.
- The import stops on RFF, RFB, checksum, or cannot allocate buffer errors.
● The reason:
- At the first export termination, the *.TOC and *.TSK files were updated before the dump
file was written.
- The *.TOC file provided the restarted export process with a dump file seek position
behind the end of the file, and the export continued writing from there.
● The solution:
- Use the same procedure as for OS crashes or power failures during export
The figure R3LOAD Export Error Due to Space Shortage: Example shows unload terminations
due to a shortage of space in the file system. After increasing the space of the file system, the
export restarts and finishes without further problems. At import time, R3LOAD stops with a
checksum error.
Export Rules
● Ensure that there is sufficient disk space available for the data dump.
● When using NFS, check SAP Note 2093132 for safe mount options.
● Run MIGMON in the background if running standalone.
● If you are unsure whether to restart or repeat the export, then it is best to repeat the entire
export process.
● The problem:
After a power failure or operating system crash, the *.TSK files that were active at the time
of the termination may be inconsistent because the operating system was unable to flush
the file buffers in time.
● The symptom:
R3LOAD attempts to restart the import, but encounters another error as soon as it tries to
process the next table.
In the case of an OS crash or a power failure, it is difficult to ascertain which OS buffers were
flushed to the open files or what happened to the entire file.
In the figure Power Failure or OS Crash at Import, a power failure or operating system crash
occurred. The operating system could not flush all of the file buffers to disk, so a mismatch
occurred between database content and the *.TSK file. R3LOAD imported TABLE08, but the
*.TSK file only contains information regarding TABLE05 and its primary key.
Solution
Use the following solution when there is a power failure or OS crash at import:
It is necessary to merge *.TSK and *.BCK. All entries that are not marked as executed (xeq)
are set to error (err). The described problem only applies to small tables.
● The problem:
- In rare cases, an attempt to delete the data that has already been loaded fails during a
restart.
● The symptoms:
- The creation of the primary key fails with database message Duplicate key .
- The number of table rows is larger than recorded in the corresponding
SAP<TABART>.TOC file.
If the data deletion fails for a large table when restarting an import, R3LOAD assumes that the
table is empty and starts the data load. The creation of the primary key stops with a duplicate
key error.
Solution
Use the following solution when there is a duplicate key problem after restarting import:
● Change the contents of the task file by hand and edit the affected <PACKAGE>.TSK file.
Change:
D <Table Name> I ok
P <Key Name> C err
To:
D <Table Name> I err
P <Key Name> C err
● If you do not perform this modification, the restart always starts at the Create Primary Key
command.
● The problem:
- R3LOAD stops during import. Various error messages may appear.
● The symptoms (error messages):
- (RFF) ERROR: SAPxxx.TOC is not from same export as SAPxxx.001
Data block corrupted or files of different exports are mixed
Note:
For more information, see SAP Notes 143272: R3LOAD (RFF) ERROR: buffer
(xxxxx kB) to small; and 438932: R3LOAD Error during restart (export).
● The reasons:
- The dump file was corrupted either during the export, or during the file transfer, or
there were read errors when accessing the dump file via NFS.
● The solutions:
- Compare original dump file against the transferred one
- If the files are different, copy the original file once more (in a safer way)
- If the files are identical, export the package once more and copy again
If the source and the target files are identical, then the contents are corrupt.
To analyze the reasons for corrupted dump files, check the export log files. There may be an
error during the export or a restart situation. Use different checksum tools to compare
original and copied files, or use different algorithms, if possible.
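A hedged sketch of such a comparison, assuming both the original and the copied dump file
are reachable from one host (for example, through an NFS mount); the file paths are
placeholders:
md5sum    /export_dump/DATA/SAPAPPL1.001  /transfer/DATA/SAPAPPL1.001
sha256sum /export_dump/DATA/SAPAPPL1.001  /transfer/DATA/SAPAPPL1.001
# identical checksums from two different algorithms indicate that the copy is fine
# and that the corruption already exists in the original export file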
● All the values for the expected database size calculated by R3SZCHK are estimates.
● It is often necessary to make enlargements to the computed target database size in the
DBSIZE.XML file.
Be generous with database space during the first test migration (allow automatic growth of up
to 20%). Adjustments for the production migration can be determined from the results of the
test migrations.
● Problem
- After a R3LOAD system copy, you realize that tables or indexes have been loaded into
the wrong database storage unit.
● Check
- Is the TABART database storage unit assignment correct in the source system (tables
TA<DBS>, IA<DBS>, TS<DBS>)?
- Have the relevant tables and indexes been assigned to the correct TABART in the
source system (table DD09L)?
- Does the DDL<DBS>.TPL file contain the correct TABART/tablespace mapping?
- Do the *.SQL files contain the right TABART names for the object?
In many cases, tables and indexes are moved to database storage units created by
customers without maintaining the appropriate ABAP dictionary tables. The result is that the
files DDL<DBS>.TPL and *.STR contain the original SAP TABART settings. Check the *.CMD
files to find out which DDL<DBS>.TPL file has been used. Oracle databases running on the old
tablespace layout are automatically installed with the reduced tablespace set on the target
system.
● Export Failure
- Terminate if data of a certain object cannot be exported.
- A stop-on-error strategy is used.
● Export Restart
- The existence of an EXPORT[_<PACKAGE>].STA file flags a restart situation.
- Items are read from EXPORT[_<PACKAGE>].STA.
- Exported objects with the status OK are skipped. The first item with an error status is
used as the restart point.
- Export continues by writing the next header for metadata or data.
● Import Failure
- If a database object cannot be created or data cannot be loaded, JLOAD tries to
proceed with the next object.
- A continue-on-error strategy is used.
Note:
Even if a table fails to import, it makes sense to save time by proceeding
with other tables. A later restart only deals with the erroneous objects.
● Import Restart
- The existence of an IMPORT[_<PACKAGE>].STA file flags a restart situation.
- Items are read from IMPORT[_<PACKAGE>].STA.
- Imported objects with the status OK are skipped. The first item with an error status is
used to restart.
- In the database, the erroneous item is cleaned up by deleting loaded data or dropping
the database object.
If the last unsuccessful action was to load data, the restart action is to delete data. If the last
unsuccessful action was to create an object (table or index), the restart action is to drop a
table or index.
LESSON SUMMARY
You should now be able to:
● Troubleshoot common migration problems
● Identify and correct errors
Learning Assessment
1. If R3LDCTL cannot connect to the database when started by SAPINST in UNIX, you should
check which of the following issues?
Choose the correct answers.
X B Check that the <SID>adm database environment settings for the C-Shell are
correct.
2. If an R3LOAD process fails to create the secondary index of a table, what is the restart
activity when running R3LOAD once more?
Choose the correct answer.
Lesson 1
Performing Unicode Conversions
Lesson 2
Performing a Near Zero Down Time System Copy
Lesson 3
Describing SAP HANA Storage
Lesson 4
Performing a HANA Migration
Lesson 5
Performing a Combined Upgrade and Database Migration to SAP HANA
UNIT OBJECTIVES
LESSON OVERVIEW
In this lesson, you will learn how to perform Unicode conversion.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform Unicode conversions
Unicode conversions are available starting with SAP NetWeaver 6.20. The best way to begin
learning about Unicode conversion is to read SAP Note 1322715, Unicode FAQs. This note
provides current links to Unicode articles and presentations.
The Unicode Collection Note provides a list of potential problems and their solutions. The SAP
Notes mentioned in the collection note are release-dependent and do not apply to all SAP
system versions.
SAP has announced the end of non-Unicode support. SAP Note 2033243 describes which SAP
system version is the last non-Unicode system and which is the last supported SAP kernel.
● The Unicode conversion uses SAPINST and R3load to export the source and to import the
target system.
● There are special requirements for preparations in the source system.
● There is a minimum support package level requirement.
● The code page is always converted at export time.
● The CPU, memory, and disk storage consumption increase after the conversion.
● A Unicode conversion can be combined with a heterogeneous system copy.
Note:
Check the current status and release strategy of the Unicode conversion in the
SAP Service Marketplace. See the quick links UNICODE, UNICODE@SAP, and
the appropriate SAP Notes.
● An upgrade and Unicode conversion can be executed at the same time (target systems
below SAP NetWeaver 7.50).
Unicode SAP Systems require SAP NetWeaver Application Server 6.20 and above.
The Unicode Conversion is only applicable if a minimum Support Package Level is installed.
Check the Unicode Conversion SAP Notes for more information.
R3LOAD converts the data to Unicode while the export is running. Since it is not sufficient for
R3LOAD to read the raw data, additional features are implemented regarding the data
context, which is available in the source system only. Specific Unicode preparation steps
delete obsolete data, fill various control tables, and generate the Unicode Nametab before the
export can take place.
Note:
A Unicode conversion at import time is not supported for customer systems.
Very large databases, short downtimes, and slow hardware may require an incremental
system copy approach.
Review ABAP coding using transaction UCCHECK. Byte offset programming or dynamic
programming (data types determined dynamically during program execution) can require
significant effort.
The Unicode conversion of MDMP (Multi Display Multi Processing) systems requires more
effort than the conversion of SCP (single code page) systems, and the effort increases with
the number of installed code pages.
In MDMP systems, not all tables provide a language identifier for their data. For this data, a
vocabulary must be created to allow an automated Unicode conversion. The creation and
maintenance of this vocabulary is a time consuming task. The involvement of experienced
consultants shortens this process significantly.
During the Unicode export of MDMP systems, R3LOAD writes *.XML files that are used for
final data adjustments in the target system (transaction SUMG). For SCP (single code page)
systems, the *.XML files are written during the Unicode conversion export for informational
purposes only.
MDMP / Unicode interfaces require significant effort. We recommend minimizing the number
of language-specific interfaces, if possible.
Interfaces to non-SAP systems need an in-depth analysis to identify their code page
requirements.
LESSON SUMMARY
You should now be able to:
● Perform Unicode conversions
LESSON OVERVIEW
In this lesson, you will learn how to run a Near Zero Down Time system copy.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the Near Zero Down Time system copy methodology
● For large databases that cannot be migrated using standard MIGMON R3LOAD system
copy methods.
● Applicable since version 4.6C.
Note:
The availability is database dependent, check with SAP for more information.
Note:
For the current MDS project handling, see SAP Note 693168.
The NZDT method to migrate SAP systems is an SAP MDS service that uses an incremental
migration approach. This service was developed to migrate very large databases. Compared
to the standard system copy procedure, it can reduce the technical system copy downtime
significantly, to a few hours or less.
In BI and SCM systems, the creation and deletion of tables (with or without a primary key) and
indexes, as well as structural DDIC changes are quite common. Contact SAP to determine if a
customer-specific NZDT project is possible.
The NZDT method should only be executed by specially trained SAP consultants. It is suitable
for heterogeneous system copies and Unicode conversions (or a combination of both).
Other technical maintenance events like upgrades or the update of enhancement packages
can be performed with this type of Near Zero Downtime procedure, but these topics are not
covered in this course.
The NZDT Workbench is a standalone ABAP NetWeaver application used to configure and
control the migration process between the source and the target system. During the table
synchronization, the data stream runs through the NZDT Workbench. A Unicode conversion
performs a data translation to Unicode.
The Proxy system is an MCOD NetWeaver system installation in the same database as the
SAP target system. It works in its own DB schema. It receives the data from the Workbench
and stores it in the tables of the target DB schema.
To record table changes, inserts, updates, and deletes, triggers are created for nearly all
tables in the source system. A trigger fires as soon as the content of a table is changed and
inserts the primary key of the changed record into the related log table.
A reliable, safe synchronization protocol ensures the data consistency between the source and
target system. It is a robust implementation that restarts after a transfer aborts, for example,
because of network problems or an unexpected system shutdown of the target system.
The NZDT procedure (based on the DMIS Add-on) is a type of toolbox, so migration scenarios
can be adapted to specific needs.
● Implements insert, update, and delete triggers for almost all tables
● Only a few tables are excluded from triggers
● R3LOAD-based clone export
● Online delta replay and final delta replay
● R3LOAD-based export and import of tables without triggers
● Transport restrictions apply to ABAP DDIC objects
● Replication of ABAP DDIC changes (only available for special projects)
The table insert, update, and delete triggers will be implemented on almost all tables. The
online delta replay (ODR) transfers the recorded data changes of the triggered tables to the
target system. During the downtime, a final delta replay (FDR) takes place, transferring the
records that were not already synchronized or were changed during the ramp down.
In order to prevent performance issues, tables with an exceptionally high update rate can be
omitted from the triggers.
Tables without triggers need to be exported and imported using R3LOAD during the
downtime. The selection of these tables must be done carefully because they can affect the
minimal achievable technical downtime. Tables containing a large amount of data should be
excluded from this selection when possible.
Tables without primary keys must be imported and exported during the downtime because
they cannot provide a log table entry that can be used to identify a unique source table record
when synchronizing the data between the source and target system.
R3LOAD is used to export the clone system, which is created after the triggers are established
on the source system. The exported data is imported to the target system to start the data
synchronization.
The ODR transfers the logged table changes from the source to the target system, while the
source system is continuously productive.
The FDR takes care of the remaining data that could not be transferred by the ODR. It is
performed during the system downtime after the source system was ramped down and there
is no activity on the system.
The technical downtime duration depends on the following conditions:
● The amount of data which was not transferred by the ODR
● The amount of data that must be exported by R3LOAD
● The length of time it takes to perform the consistency check between the source and the
target system (row counting)
The achievable technical downtime is small compared to a conventional database export and
import.
The table structures of tables having triggers cannot change during the NZDT process.
Transports intended to modify the structure of triggered tables must be postponed until no
NZDT triggers are active, for example, between test cycles or after the migration is finished.
An NZDT project with ABAP DDIC replication (including the replication of the creation,
modification, and deletion of tables and structures) needs to be discussed with SAP: it must
be determined whether it can be supported on the respective customer system, it creates
additional effort, and it requires the involvement of DMIS development. Secondary indexes
cannot be replicated and must be generated as a post-migration task.
First, the NZDT workbench needs to be installed. It is a separate NetWeaver system of the
same release and basis support package as the customer source system with an applied
DMIS add-on.
Next, the same DMIS add-on used on the NZDT workbench must be installed on the source
system. The add-on installation does not modify existing repository or data dictionary
objects, so it can be applied without downtime.
Using the NZDT Workbench, triggers and logging tables are created to record the inserts,
updates, and deletes in the source system database. There must be enough free space for the
logging tables in the source database (usually several hundred GB).
As soon as the triggers are active, the amount of DB logging increases to at least double the
size compared to operation without triggers. A further increase is to be expected when
starting the data synchronization. The DB log backup must be able to handle the larger
amount of logs, otherwise it will cause a DB standstill (archiver stuck).
After the triggers are established in the source system, the database is cloned (copied using
backup/restore or by advanced storage copy techniques). All tables of the clone are exported
using R3LOAD and imported into the target system DB schema.
The target database is used to run two independent SAP systems, each of them on its own DB
schema. The proxy system can be a copy of the NZDT workbench system. This simplifies the
installation as the proxy system requires the same setup. It is an MCOD installation in the
same database as the migration target system.
The migration target system is created from the export of the clone system, but it is not
started until the final data synchronization is complete.
After the import into the target DB schema is finished, table aliases are created to provide a
direct proxy system access to the tables in the target schema. To distinguish the local proxy
tables from the target system tables, the aliases are created using the prefix /1LT/*.
While the export/import is running, preprocessing takes place on the source system. It
examines the NZDT log table entries by searching for records which were modified several
times. When the same record is updated more than once or is deleted afterwards, only the
final updated record or the final deletion needs to be transferred. All intermediate states can
be marked as processed. Preprocessing causes additional database transaction log entries
for every marked record; generally this mechanism reduces the amount of transfer data by
30% to 40%. The same mechanism is used during the online delta replay implicitly.
The online delta replay table synchronization can be prioritized and parallelized individually
for each table. The number of parallel running jobs can be adapted depending on the current
load of the system. Usually the jobs are distributed over multiple application servers.
Tables with a high update frequency and a large number of log entries are set to a high
transfer priority and transferred in parallel, utilizing multiple batch jobs at the same time.
The logging tables contain the primary keys of the changed records, additional information
about the type of change (insert, update, delete), a time stamp, and the process status. Every
time a row is inserted, updated, or deleted, a database trigger is fired to update the logging
table.
The synchronization jobs scan the logging tables (in the example, TAB01' and TAB02') for
unprocessed records. These records are transmitted using the NZDT workbench to the proxy
system. The proxy system updates the data in the target system schema by utilizing the
previously created aliases.
A safe protocol makes sure that only those records that have been successfully updated in
the target database are marked as completed (processed).
In case of a Unicode conversion, the translation to Unicode is performed in the NZDT
workbench.
After the online replay is finished to 99.9%, the source system must be ramped down for the
final offline delta replay. This means that there cannot be any system activity: users are
locked out, no jobs are running, interfaces are stopped, and so forth, to avoid further data
changes.
Last-minute jobs or prework must be avoided because there might not be enough time to
transfer all the changes before the ramp down. The remaining NZDT logging entries need to
be handled by the final delta replay, increasing the technical downtime.
After the source system is ramped down, the offline (final) delta replay takes place. This
transfers the data for those tables where the online replay was not 100% completed or data
was changed during ramp down.
The remaining tables that do not have triggers for technical or performance reasons are
exported and imported by R3LOAD. This should only take a few minutes.
After comparing the row count of the tables that were transferred by the online and the final
delta replay, the technical migration is finished. The target system can be started, and the
source system and the proxy system can be stopped now.
In addition to the shutdown of the source, workbench, and proxy systems, the post-migration
activities include the removal of the proxy instance DB schema and all the generated
programs that were created to transfer the data. The dictionary definitions of the NZDT log
tables are also removed from the target system.
Usually the migration scenario is tested at least two times.
There is no downtime available to perform a final delta replay from the productive source
system during a test run, so the source database is copied a second time after completing the
online delta replay. The resulting clone system is isolated to avoid system activity and is
started to perform the final delta replay simulation as it would run from the production
system including the R3LOAD export and import of the remaining tables.
The NZDT workbench can be used multiple times, but the proxy and target systems need to
be reinstalled for each run. It makes sense to restore them from a backup taken after the
initial system setup.
LESSON SUMMARY
You should now be able to:
● Describe the Near Zero Down Time system copy methodology
LESSON OVERVIEW
In this lesson, you will learn how to describe SAP HANA storage.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe SAP HANA storage
SAP HANA is an in-memory database and application platform, which, for many operations, is
10 to 1000 times faster than a regular database. This allows for a simplification of design and
operations and real-time business applications.
SAP HANA has numerous engines providing various algorithms for in-memory computing and
application libraries to develop applications that run directly on SAP HANA. The libraries are
linked dynamically to the SAP HANA database kernel.
An SAP HANA appliance is a specific combination of hardware and SAP database software.
The SAP HANA appliance software can only be installed by SAP certified hardware partners
on validated hardware running a specific operating system. The appliance comes
preconfigured (hardware and software installed). It provides a fast implementation because
the system is ready to run out of the box.
With the introduction of SAP HANA Tailored Datacenter Integration (TDI), customers are
allowed to install their own SAP HANA system on certified hardware. This is more flexible than
the SAP HANA appliance approach. To ensure quality and consistency of the installations in
the customer datacenter, SAP has set up a special certification program for installing SAP
HANA systems.
Note:
For more information see the following SAP Notes:
● 1599888 SAP HANA: Operational Concept
● 1514967 SAP HANA: Central Note
● Column store
● Row store
● Compression
● Insert only on delta merge
In contrast to classic databases, SAP HANA holds most table data in memory. Not only does
this require a large amount of memory, it also needs an efficient way to store and retrieve the
data. For that purpose, most of the data is stored in columns (column store tables) and not by
rows (row store tables) as in conventional database architectures. This concept allows for a
very high compression ratio because the number of repeated bytes in the same column but in
different rows of a table is much higher than inside the row itself. In combination with
advanced data selection strategies, this allows very fast in-memory data access.
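For example, a column that stores a country key for millions of rows contains only a handful of
distinct values; stored column-wise, these repeated values can be encoded very compactly
(for instance, with a dictionary of distinct values plus short value IDs), whereas stored
row-wise the same value is repeated in full in every row.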
Write operations on compressed data would be costly, since they require reorganizing the
storage structure. Therefore, write operations in a column store do not directly modify
compressed data. All changes go into a separate area called the delta storage. The delta
storage exists only in main memory. Only delta log entries are written to the persistence layer
when delta entries are inserted. The delta merge operation (updating the compressed column
store) is decoupled from the execution of the transaction that performs the changes. It
happens asynchronously at a later point in time.
In-depth delta merge information can be found at http://scn.sap.com/docs/DOC-27558.
Some tables still need to be stored in rows because of their small size or usage (for example,
a fast change frequency). R3LOAD creates tables for the column or row store while importing
into an SAP HANA database. Prior to SAP NetWeaver 7.40, SAP provides a list of row store
tables to be used during the migration import, generated by SMIGR_CREATE_DDL. Starting
with SAP NetWeaver 7.38 and 7.40, the row store/column store information is managed in the
ABAP DDIC, and R3LDCTL extends the table type in the *.STR files with a row/column store
flag. Row store tables are held completely in memory. Column store table content is loaded
on demand or can be preloaded (for example, by SQL scripts).
● Table partitioning
● Multicore architecture