
Lesson: Controlling the JLOAD Processes

JMIGMON State Properties: Restart

Figure 136: JMIGMON Control and Output Files

If there is an export or import error, inspect the jmigmon.console.log. More detailed information can be found in the relevant job log.
To manage the packages that are already exported, currently in use, or terminated on error, use the JMIGMON state files. Changing a package state from minus (-) to zero (0) forces JMIGMON to restart the job.
When JMIGMON is restarted after all packages have completed but some terminated with errors, the unsuccessful packages are processed automatically; there is no need to set the state from "-" to "0".
The restart behavior of JMIGMON is basically the same as described for MIGMON.

Table 54: Example export_jmigmon_states


EXPORT_METADATA.XML=+ Finished
EXPORT_13_J2EE_CONFIGENTRY.XML=+ Finished (split table)
EXPORT_14_J2EE_CONFIGENTRY.XML=+ Finished (split table)
EXPORT_0.XML=+ Finished

Table 55: Example import_jmigmon_states

IMPORT_METADATA.XML=+ Finished
IMPORT_13_J2EE_CONFIGENTRY.XML=+ Finished (split table)
IMPORT_14_J2EE_CONFIGENTRY.XML=? Running (split table)
IMPORT_0.XML=- Failed
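To restart the failed package from Table 55, change its state character from "-" to "0" in the import state file, for example:

IMPORT_0.XML=0

This sketch assumes that the annotations ("Finished", "Running", "Failed") in the tables above are explanatory comments and that the state file itself holds plain <job file>=<state> entries.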


JMIGMON Log Files


JMIGMON writes the following two log files:
● jmigmon.java.log
● jmigmon.console.log

The Java log records the start of JMIGMON; startup errors can be analyzed there. The console log shows which JLOAD processes are completed and which have failed.

LESSON SUMMARY
You should now be able to:
● Control the JAVA migration process



Unit 8
Lesson 8
Performing Package and Table Splitting for JLOAD

LESSON OVERVIEW
In this lesson, you will learn how to perform package and table splitting for JLOAD.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform package and table splitting for JLOAD

JPKGCTL Package and Table Splitting

● Provides a single tool for JLOAD package and table splitting


● Creates packages according to size
● Splits tables based on rule file or command line parameters

JPKGCTL Control Files


The split parameter defines the size limit for JLOAD packages. JPKGCTL continues to add tables to a package until it reaches the size limit. The number of packages is related to the size limit parameter: a small size limit results in many package files, while a large size limit creates only a few packages. If a table is equal to or larger than the given size, the package file contains only this single table.
You only require the splitrules file if you are planning table splitting. It can contain entries in three different formats. If only the number of splits is specified, all fields of the primary key are checked for highest selectivity. If a single field is explicitly given, only this field is used for splitting. If multiple fields are provided, the most selective field is used; see the sketch after this paragraph.
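As an illustration, a splitrules file could contain one entry per table in the three formats described above. This sketch assumes the rule file accepts the same <TABLE>:<number of splits>[:<field>[;<field>]] notation as the -tablesplit command line option shown later in this lesson:

BC_COMPVERS:2
J2EE_CONFIGENTRY:4:CID
J2EE_CONFIG:4:CID;PATHHASH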


Workflow Splitting

Figure 137: JPKGCTL (JSPLITTER): Workflow

SAPINST generates the jsplitter_cmd.properties file according to the user input.


JPKGCTL connects to the database, reads the database object definitions, and calculates the sizes of the items to be exported. The tables are distributed to the JLOAD job files (packages). The distribution criterion is the package size as provided in the jsplitter_cmd.properties file. After all packages are created, the sizes.xml file containing the expected export size of each package is written. JMIGMON uses this content to start the export/import in package size order.


Table Split Strategy

Figure 138: JPKGCTL (JSPLITTER): Table Split Strategy

Table splitting is optional, but it is appropriate when large tables significantly influence the export time. JPKGCTL can locate a split column automatically, but it then only checks the fields of the primary key. To use a different field, specify it explicitly in a split rule file. If the requested number of splits cannot be achieved, the number of splits is automatically reduced. If this process does not result in useful WHERE conditions, JPKGCTL gives up and no table splitting takes place.


Job Files for Split Tables

Figure 139: EXPORT<PACKAGE>.XML of a Split Table

The WHERE condition is used to select data within a specified range. For each job file of a split
table, a separate JLOAD is started.
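For illustration only, a table split into four parts on the key field CID could carry range conditions such as the following in its job files. The boundary values are invented, and the exact representation inside the XML job files may differ:

WHERE "CID" < 1000
WHERE "CID" >= 1000 AND "CID" < 2000
WHERE "CID" >= 2000 AND "CID" < 3000
WHERE "CID" >= 3000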

JPKGCTL Command Line Parameters

Table 56: JPKGCTL Command Line Parameters

Option            Description
-sec              List that includes the SAPSID, data source name[,SecureStore property file,SecureStore key file][,SecureStore key phrase]
-split            Size of the split package with tables. The size can be a number of bytes (for example, 1048576, 200M, 8G, and so on)
-dataDir          Output data directory
-log              Log file with program output messages and errors
-tablesplit       Table split rules provided as command line parameters
-splitrulesfile   Table split rules provided in a file
-checksplitrules  Checks the syntax of the provided splitrulesfile
-help             Prints help options for the parameters and their usage

Table 57: Command Line Example

jsplitter.bat -sec=MJ1,jdbc/pool/MJ1
\\wdflbmt7051/sapmnt/MJ1/SYS/global/security/data/SecStore.properties
\\wdflbmt7051/sapmnt/MJ1/SYS/global/security/data/SecStore.key
-split=10M -tablesplit BC_COMPVERS:2
-tablesplit J2EE_CONFIG:4:CID;PATHHASH
-tablesplit J2EE_CONFIGENTRY:4:CID

The goal of the tool is to provide job file packages for import and export. This can be done in two ways: package splitting and table splitting. For package splitting, the split option is used to generate packages with the desired package split size. For table splitting, the splitrulesfile or tablesplit option must be added. The splitrulesfile (splitting configuration file) is used for table splitting when providing an input file containing the number of splits and the key for splitting. It is also possible to use the tablesplit command line option to apply the split rules directly. If neither a splitrules file exists nor table split rules are provided on the command line, table splitting is omitted and the tool only performs package splitting.
The syntax of the provided splitrulesfile can be checked with the checksplitrules option. In this case, all other options are ignored and only the syntax of the rule file is verified. For more detailed parameter descriptions, see the JSPLITTER user guide.
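A minimal syntax check could look like the following. The rule file name splitrules.txt and the parameter notation are assumptions based on the options in Table 56:

jsplitter.bat -checksplitrules -splitrulesfile=splitrules.txt -log=checksplit.log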

LESSON SUMMARY
You should now be able to:
● Perform package and table splitting for JLOAD



Unit 8

Learning Assessment

1. What task does the Table Checker perform?


Choose the correct answer.

X A Verifies the consistency of the target database tables with *.STR file definitions

X B Compares the number of table records in *.TOC with the target database records

X C Takes the dump file checksum and compares it with the table checksum in the
target database

X D Checks that all tables in the *.TSK files were created in the target DB

2. Which of the tools contained in the MICHECK.SAR archive need to connect to the
database?
Choose the correct answer.

X A Object Checker

X B Table Checker

X C Package Checker

3. Which check tools are started by SAPINST after MIGMON finishes the import?
Choose the correct answers.

X A Object Checker

X B Table Checker

X C Package Checker

4. The Time Analyzer does which of the following?


Choose the correct answers.

X A Shows the longest running table in a package.

X B Calculates the export and import runtime for each package

X C Provides separate lists for the data load and index creation

X D Provides output files: TXT, DOCX, and PDF


5. The Migration Monitor is used to allow the configuration of a parallel export/import. Which files are used to tell the import Migration Monitor that the export of a specific package is complete?
Choose the correct answer.

X A Log files

X B Table of contents files

X C Signal files

6. If the MIGMON export or import package order is not explicitly specified, what is the
default order?
Choose the correct answer.

X A Alphabetical order

X B Undetermined order

X C By package size order

X D By dump size order

7. What are the properties of the R3TA splitter?


Choose the correct answers.

X A Creates database independent WHERE conditions

X B Writes all WHERE conditions into a single output file

X C Must be started in the export downtime

X D Can split by number of rows

X E Cannot be used with declustering

8. What are the properties of the Oracle PL/SQL splitter?


Choose the correct answer.

X A In general, it is only usable if the target database is Oracle.

X B ROWID splitting must be done before the export downtime.

X C ROWID splitting is quite fast.

X D Not available for use with SAPINST


9. The WHERE splitter is used to do what?


Choose the correct answer.

X A Split R3LOAD *.STR files into smaller units

X B Create *.WHR conditions

X C Split R3TA splitter output

X D Split Oracle PL/SQL splitter output

10. The Distribution Monitor is used to do what?


Choose the correct answers.

X A Assign packages to application servers

X B Provide a central console to monitor the application servers

X C Increase or decrease the number of R3LOAD processes dynamically

X D Migrate ABAP and Java systems

11. Why does JMIGMON require logon information for the database?
Choose the correct answer.

X A It connects itself to the database.

X B It starts JLOAD, which connects to the database.

X C It starts JPKGCTRL, which connects to the database.

X D It starts JSIZECHK, which connects to the database.

12. The number of packages which are generated by JPKGCTL is related to which command
line parameter?
Choose the correct answer.

X A -package_files

X B -size_limit

X C -split

X D -split_rules


13. Table splitting is not required.


Determine whether this statement is true or false.

X True

X False



Unit 8

Learning Assessment - Answers

1. What task does the Table Checker perform?


Choose the correct answer.

X A Verifies the consistency of the target database tables with *.STR file definitions

X B Compares the number of table records in *.TOC with the target database records

X C Takes the dump file checksum and compares it with the table checksum in the
target database

X D Checks that all tables in the *.TSK files were created in the target DB

2. Which of the tools contained in the MICHECK.SAR archive need to connect to the
database?
Choose the correct answer.

X A Object Checker

X B Table Checker

X C Package Checker

3. Which check tools are started by SAPINST after MIGMON finishes the import?
Choose the correct answers.

X A Object Checker

X B Table Checker

X C Package Checker


4. The Time Analyzer does which of the following?


Choose the correct answers.

X A Shows the longest running table in a package.

X B Calculates the export and import runtime for each package

X C Provides separate lists for the data load and index creation

X D Provides output files: TXT, DOCX, and PDF

5. The Migration Monitor is used to allow the configuration of a parallel export/import. Which files are used to tell the import Migration Monitor that the export of a specific package is complete?
Choose the correct answer.

X A Log files

X B Table of contents files

X C Signal files

6. If the MIGMON export or import package order is not explicitly specified, what is the
default order?
Choose the correct answer.

X A Alphabetical order

X B Undetermined order

X C By package size order

X D By dump size order

7. What are the properties of the R3TA splitter?


Choose the correct answers.

X A Creates database independent WHERE conditions

X B Writes all WHERE conditions into a single output file

X C Must be started in the export downtime

X D Can split by number of rows

X E Cannot be used with declustering


8. What are the properties of the Oracle PL/SQL splitter?


Choose the correct answer.

X A In general, it is only usable if the target database is Oracle.

X B ROWID splitting must be done before the export downtime.

X C ROWID splitting is quite fast.

X D Not available for use with SAPINST

9. The WHERE splitter is used to do what?


Choose the correct answer.

X A Split R3LOAD *.STR files into smaller units

X B Create *.WHR conditions

X C Split R3TA splitter output

X D Split Oracle PL/SQL splitter output

10. The Distribution Monitor is used to do what?


Choose the correct answers.

X A Assign packages to application servers

X B Provide a central console to monitor the application servers

X C Increase or decrease the number of R3LOAD processes dynamically

X D Migrate ABAP and Java systems

11. Why does JMIGMON require logon information for the database?
Choose the correct answer.

X A It connects itself to the database.

X B It starts JLOAD, which connects to the database.

X C It starts JPKGCTRL, which connects to the database.

X D It starts JSIZECHK, which connects to the database.


12. The number of packages which are generated by JPKGCTL is related to which command
line parameter?
Choose the correct answer.

X A -package_files

X B -size_limit

X C -split

X D -split_rules

13. Table splitting is not required.


Determine whether this statement is true or false.

X True

X False



UNIT 9 The System Migration Process

Lesson 1
Performing an ABAP System Migration

Lesson 2
Performing a JAVA System Migration

UNIT OBJECTIVES

● Perform an ABAP system migration


● Perform a Java system migration



Unit 9
Lesson 1
Performing an ABAP System Migration

LESSON OVERVIEW
In this lesson, you will learn how to perform an ABAP system migration.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform an ABAP system migration

Technical Migration Steps for an ABAP-Based System

Figure 140: Technical Migration Steps (ABAP-Based System)

Many migration steps can be performed in parallel in the source and target systems.
After step 3, Generate Templates for DB Size, is completed in the source system, be prepared to start step 8, Create database, in the target system. Once step 6, File transfer via FTP, tape, USB disk, laptop, is complete, steps 7 and 8 should already have been completed in the target system.
In a parallel export/import scenario, steps 4, 5, 6, 9, and 10 run at the same time.


Before migration, download SAP Notes for homogeneous and heterogeneous system copy
and installation.

OS Level Preparation Steps for Technical Migration

● Set up migration file systems and directories.


● Download the current system copy tools from SAP Marketplace
● Cancel all operating system and database data backups.
● Shut down external interfaces.
● Ensure that the database is not accessed while the export is running.

Before the technical migration, update the database statistics or perform other performance-relevant activities at system level.

SAP System Level Steps for Technical Migration

● Delete unnecessary data, such as spool data and test clients.


● Release all repairs and corrections if you are changing the SAP SID.
● Check in transaction DB02 or DBACOCKPIT for missing tables or indexes.
● Suspend all jobs in the SAP system and lock users.
● Run the report SMIGR_CREATE_DDL (creates <TABART>.SQL files).
● Stop the SAP system.

To reduce the time required to unload and load the database, minimize the amount of data in
the migration source system.
Before the system copy, suspend all jobs in the source system. This prevents jobs from
running directly after the first start of the target system. Use the reports BTCTRNS1 (set jobs
into suspend mode) in the source and BTCTRNS2 (reverse suspend) in the target system.
If the target system has a new SAP SID, release all of the corrections and repairs before
starting the export.
If the database contains tables that are not in the ABAP dictionary, check whether some of
these tables also need migrating.


Figure 141: Technical Migration Preparation: Step 3

All SAP systems that use non-standard database objects (BI/BW, SCM/APO) require the execution of report SMIGR_CREATE_DDL. Other system types can have non-standard database objects as well. On Oracle, for instance, the compression of tables and indexes triggers SMIGR_CREATE_DDL to create *.SQL files containing compression statements, if the target database is Oracle as well. We recommend running SMIGR_CREATE_DDL as a test to check whether it creates *.SQL files or not. After you have called this report, do not change the non-standard objects again before executing the export. If no database-specific objects exist, then no <TABART>.SQL files are generated. It is important that the report completes successfully.
You can use any file system location as the Installation Directory. SAPINST will ask for the
location. Follow the guidelines in the manual for homogeneous and heterogeneous system
copies.
Depending on the target database, additional options may be available to select in the
Database Version field.

Note:
For more information, see SAP Note: 888210 - NW 7.**: System copy
(supplementary note).


Figure 142: Generate *.EXT and *.STR Files

SAPINST calls R3LDCTL and R3SZCHK. The runtime of R3SZCHK depends on the version, the size of the database, and the database type. The *.EXT files and DBSIZE.XML are generated by R3SZCHK. The table and index sizes are written by R3SZCHK into the table DDLOADD, from where they are distributed into the *.EXT files.

Figure 143: Split *.STR Files and Tables

To improve the unload and load times, the generated *.STR and *.EXT files are split into
smaller units. Table splitting is particularly effective at reducing the export and import
runtime on large tables.


Figure 144: Generate Export *.CMD and *.TSK Files

MIGMON calls R3LOAD to create task files. If WHERE files exist, the WHERE conditions are
inserted into the *.TSK files. Afterwards, MIGMON generates the command files.

Figure 145: Export Database with R3LOAD

MIGMON starts a number of R3LOAD processes. A separate R3LOAD process is executed for
each command file. The R3LOAD processes write the data dump to disk. As soon as an
R3LOAD process finishes (whether successfully or due to error), MIGMON starts a new
R3LOAD process for the next command file.


Manual File Transfer

● FTP
● Tape
● USB storage device
● Network storage device

Caution:
Use a safe copy method.

In cases where dump files must be copied to transportable media, make sure that the files are
copied correctly. Appropriate checksum tools are available for every operating system.
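For example, on a Unix-like system, checksums could be created on the source host and verified on the target host after the copy. This is a sketch; the md5sum tool and the directory names are assumptions:

cd /export_dir/DATA
md5sum *.0* *.TOC > /tmp/export.md5    # on the source host
md5sum -c /tmp/export.md5              # on the target host, in the copied DATA directory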

Table 58: Files to Transfer to the Target System


Location Files

<export.dir> SOURCE.PROPERTIES (if existing)


<export.dir> LABEL.ASC
<export.dir>/DATA <PACKAGE>.<nnn>
<export.dir>/DATA <PACKAGE>.STR
<export.dir>/DATA <PACKAGE>.TOC
<export.dir>/DATA <TABLE>#.WHR (if existing)
<export.dir>/DB/<target_DBS> <PACKAGE>.EXT
<export.dir>/DB/<target_DBS> DBSIZE.*
<export.dir>/DB/<target_DBS> <TABART>.SQL (if existing)
<export.dir>/DB DDL<targetDBS>.TPL
<export.dir>/DB SQLFiles.LST (if existing)

Table 59: Files Not Transferred to the Target System


Location Files

<install_dir> <PACKAGE>.CMD
<install_dir> <PACKAGE>.TSK

Note:
You must transfer all of the content of the export directory to the target system.


The file LABEL.ASC is generated during the export of the source database. SAPINST uses the
content of LABEL.ASC to determine whether the dump data is read from the correct
directory.
SMIGR_CREATE_DDL generates the SQLFiles.LST file together with the *.SQL files.
The *.CMD and *.TSK files are generated separately for export and import, so do not copy
them.

Migration Key: Process

1. You log on to the SAP Service Marketplace using an S-User that is valid for the installation
number of the source system.

2. You use the alias migrationkey.

3. The customer accepts the migration key license agreement.

4. You select the installation number of the source system.

5. You provide migration parameters (case sensitive).

6. You check the migration key as soon as possible.

Because the customer must accept the migration key license agreement, have the customer request the migration key.

Note:
If you have any problems, see SAP Note 338372.

Figure 146: Migration Key


During the migration key generation, you are asked whether you use the classical migration or the DMO migration procedure.
The migration key is identical for all SAP Systems with the same installation number. The
migration key must match the R3LOAD version. If asked for the SAP Release, enter the
release version of the R3LOAD that you use. If you are in doubt, check the log files.
Some systems use several different host names (for example, in a cluster environment).
Generate the migration key from the node name that is listed in the (GSI) INFO section of the
R3LOAD export log (source system) and MIGKEY.log (target system).
SAPINST tests the migration key by calling R3LOAD -K (uppercase K). The file MIGKEY.log
contains the check results.

Note:
For more information, see SAP Note 338372: Migration key does not work.

Install the SAP Instance Software

● Use the current SAPINST from the downloaded Software Provisioning Manager media.
● Install the latest SAP kernel.

Install the Database Software

● Use the current SAPINST from the downloaded Software Provisioning Manager media
● Install the latest database patches.

Note:
For the latest SAP kernel and database patches, check the approved SAP product
platform and release combinations by examining the product availability matrix
(PAM).

Determine the Database Configuration

● Configure the target database size (adapt DBSIZE.XML)


● Make sure that there is additional space for database growth
● Configure database parameters for import

Considerations When Creating a Database

● The process can be time-consuming when large databases are involved.


● The database creation should take place before starting the export of the source database.

Be generous in your database sizing during the first migration test run. The experience gained
through the test migration is better than any estimate that you calculate in advance, and you
can always adjust the values in subsequent tests.


Figure 147: Generate Import *.CMD and *.TSK Files

MIGMON calls R3LOAD to create task files. If WHERE files exist, the WHERE conditions are
inserted into the *.TSK files. Afterwards MIGMON generates the command files.

Figure 148: Import Data with R3LOAD

MIGMON starts the import R3LOAD processes.

Technical Post-Migration Activities


The homogeneous and heterogeneous system copy guides, and their respective SAP Notes,
describe the general follow-up activities.


Technical Post-Migration Activities: OS Level

● Check the DIPGNTAB log file for errors


● Copy external SAP system files (for example, job logs, archives, external spool files, and
interface data)
● Set up access to the transport directory
● Set up external interfaces
● Perform a file system backup

After the import is completed, SAPINST starts the program DIPGNTAB to update the imported NAMETAB tables. The SAP table field order and field lengths from the source system are updated with the field order and field lengths found on the target database. Examine the created log file dipgntab[<SAPID>].log; the log summary at the end of the file must not report any errors.

Technical Post-Migration Activities: DB Level

● Execute a database backup


● Perform a database restore test
● Update statistics or perform other performance-relevant activities
● Adjust the database parameters
● Analyze the DB fill level
● Set up additional indexes if necessary (see relevant SAP Notes)
● Delete old SAP system monitor and statistics data

In many cases, changing a database system includes changing the backup mechanism. Make
sure that you are familiar with the changed or new backup and restore procedures.
After the migration, you can delete the SAP System statistics and backup information for the
source system from the target database. For a list of the tables, see the system copy guide.

Technical Post-Migration Activities: SAP System Level

● Ensure that SAPINST executed the RFC reports successfully


● Check the SAP system installation (transaction SICK)
● Perform an ABAP dictionary and database consistency check (DB02/DBACOCKPIT)
● If the SAPSID has changed, initialize the transport system (SE06)
● Configure the transport management system (STMS)
● Regenerate ABAP loads (SGEN)
● Schedule data backups (DB13)
● Adjust all printer definitions (SPAD)
● Adjust RFC destinations (SM59), profiles (RZ10), and operation modes (RZ04)
● Release suspended jobs (SE38 BTCTRNS2), if necessary change the batch server name


● Install the SAP system license (SLICENSE)

To finalize the system copy, SAPINST executes various jobs in the SAP target system using
RFC (as user DDIC in client 000). Always make sure that SAPINST has executed all RFCs
without error and finished properly.
The jobs that were suspended in the source system with report BTCTRNS1 can be released on
the target system by running report BTCTRNS2.

Technical Post-Migration: Running RS_BW_POST_MIGRATION

● Connect all data source systems to the migrated system


● If changing the database system, ensure that the connected SAP data source systems are modifiable
● Run the report in the background (it can take several hours)
● Check the spool and job protocols carefully

The non-standard database objects (mainly BW objects) that were identified on the source system and re-created and imported into the target system need some adjustments. The report RS_BW_POST_MIGRATION performs the necessary changes. With a database migration, RS_BW_POST_MIGRATION connects to each SAP data source system to regenerate the transfer rules. Make sure that the connected systems are in a modifiable state, otherwise the transfer rules cannot be generated.

Note:
For more information, check the "Follow-Up Activities" chapter in the respective
system copy guide.

Figure 149: Technical Post Migration Activities: Step 5


The report variants SAP&POSTMGRDB (DB changed), SAP&POSTMGRHDB (DB changed to SAP HANA), and SAP&POSTMGR (DB not changed) are provided.
The steps in the report RS_BW_POST_MIGRATION are as follows:
● Reset time stamps for BW programs:
Invalidates previously generated programs to ensure that every program is regenerated
according to the new database needs.
● Create DBDIFF entries:
CREATE appropriate DBDIFF entries
● Adapt Basis Cube Indexes:
Runs CHECK_INDEX_STATE
● Adjust indexes for aggregates in DDIC:
Runs CHECK_INDEX_STATE.
● Create new PSA version:
Runs method activate_all_ds (activate all data sources)
● Drop temp. tables:
Runs SAP_DROP_TMPTABLES
● Re-create fact views:
Runs factviews_recreate
● DB specific steps:
Runs database-specific tasks (if defined for current database)
● Only for selected cube:
Restrict index adjustments for cubes/aggregates and DB specific steps to a selected cube
only

Regeneration Methods for ABAP Loads


ABAP loads are OS-dependent. Use the appropriate method to regenerate them on the
migration target system.

● Transaction SGEN

Note:
The table containing the generated ABAP loads is not exported/imported with the R3LOAD system copy method.

The table in the SAP0000.STR file contains the generated ABAPs (ABAP loads) of the SAP
System. These loads are no longer valid. For this reason MIGMON does not export the table.
Each ABAP load is generated automatically the next time a program is called. The system is
slow unless all commonly used programs are generated. Use transaction SGEN to regenerate
the ABAPs.


General Tests

● Set up a test environment.


● Run reports to compare results to those in the source system.
● Test typical transactions from day-to-day business.
● Perform a runtime test of critical reports and transactions.
● Run performance tests under a heavy load.
● Verify communication with external systems.
● Create a cut over plan for the final migration.

Hint:
Involve end users in the test process.

Take care when setting up the test environment. To prevent unwanted data communication
to external systems, isolate the system. External systems do not distinguish between
migration tests and production access. An existing checklist from a previous upgrade or
migration can be a valuable source of ideas when you develop a cut over plan. To identify any
differences between the original and the migrated system, involve end users as soon as
possible.

LESSON SUMMARY
You should now be able to:
● Perform an ABAP system migration



Unit 9
Lesson 2
Performing a JAVA System Migration

LESSON OVERVIEW
In this lesson, you will learn how to perform a JAVA system migration.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform a Java system migration

Technical Migration Steps: Java-Based System

Figure 150: Technical Migration Steps Java-Based System

Technical Migration Preparation: SAP Notes


Just before you start the migration, check all the migration-related SAP Notes for updates on
the following subjects:

● Heterogeneous and homogeneous system copy


● Installation


● Support package stacks and installed software components supported by the migration
tools

Figure 151: Generate Template for Target DB Size

Collect Application Data from File System

● SAPINST starts SAPCAR to archive application-specific data stored in the file system.
● As a prerequisite, SAPINST must recognize the application.
● Various *.SAR files can be created.

Note:
This is not required in SAP releases based on 7.10 and later.

If SAPINST does not recognize the application and its related files, archives are not created.
Applications that are not recognized by SAPINST may require operating system-specific
commands to copy the respective directories and files to the target system. For instructions
on how to copy these applications, see the corresponding SAP Notes.

Collect SDM Data

● SAPINST calls the software deployment manager (SDM) to collect deployed file system
software components.
● Read the SDM repository (list of deployed components).
● Put file system components into the SDMKIT.JAR file.


Note:
This is not required in SAP releases based on 7.10 and later.

In SAP releases below 7.10, the SDM repository is installed in the file system and is
redeployed into the target system from the SDMKIT.JAR file.

Figure 152: JPKGCTL (JSPLITTER)

JPKGCTL packaged job files can contain multiple tables. They are named EXPORT_<n>.XML. Job files for a single table are named EXPORT_<n>_<TABLE>.XML. If a table was split, the resulting job files are named the same, but each with a different number. The export and import job files are generated at the same time. They share the same job name but use the file name prefix EXPORT or IMPORT. The sizes.xml file contains the expected export size of each generated packaged job file and helps JMIGMON to export/import the packages by size. The largest package is exported/imported first, then the next smaller package, and so on.


Figure 153: Export Database with JLOAD

Note:
In versions without JPKGCTL, JLOAD generates the EXPORT.XML and
IMPORT.XML by itself.

Table 60: File Transfer


Location Content

<export_dir> SOURCE.PROPERTIES
<export_dir> LABEL.ASC
<export_dir> LABELIDX.ASC
<export_dir>/APPS LABEL.ASC
<export_dir>/APPS/... Application specific
<export_dir>/DB/<target_DBS> DBSIZE.XML
<export_dir>/JDMP EXPDUMP.<nnn> / EXPDMP_<PACKAGE>.<nnn>
<export_dir>/JDMP sizes.xml (if it exists)
<export_dir>/JDMP LABEL.ASC
<export_dir>/JDMP IMPORT[_<PACKAGE>].XML
<export_dir>/JDMP EXPORT[_<PACKAGE>].XML
<export_dir>/SDM SDMKIT.JAR
<export_dir>/SDM LABEL.ASC
<export_dir>/SEC SEC.SAR

<export_dir>/TOOLS SLTOOLS.SAR
<export_dir>/... Others

The LABELIDX.ASC file and the LABEL.ASC files are generated during the export of the source database. SAPINST uses the content of these files to determine whether the load data is read from the correct directory. The <export_dir>/APPS directory may be empty if, for example, no applications that keep their data in the file system are installed, or if the application is not known to SAPINST.
The SEC.SAR file contains the SecStore.properties. The SLTOOLS.SAR file contains the Java system copy tools, such as JLOAD, JSPLITTER, JMIGMON, and JMIGTIME. The tool user guides are provided as PDF documents.
Additional directories and files created in the export directory are listed as "Others" in the table File Transfer.

Install DB/SAP Software and Extended Database: Standalone Java installation


Note the following about a standalone Java installation:

● Adapt DBSIZE.XML
● Install an SAP Java instance
● Install database software
● Apply the latest database patches

Install SAP Software and Extended Database: Java Add-In Installation


Note the following about a Java add-in installation:

● Adapt DBSIZE.XML
● Install an SAP Java instance

Note:
The Java Add-In installation, together with the necessary system copy activities, is done in a single step with the ABAP system installation and import.

SDM Repository Reinstallation


Note the following regarding SDM reinstallation:

● SAPINST calls the SDM to reinstall (deploy) its file system components including the SDM
Repository from the SDMKIT.JAR.

Note:
Since SAP NetWeaver release 7.10, the SDM is no longer used.


The SDM holds its repository in the file system. For a reinstallation, the content of SDMKIT.JAR is used, which contains the necessary file system components as collected in the source system.

Figure 154: Import Database with JLOAD

Note:
In versions without JPKGCTL, JLOAD generates the EXPORT.XML and
IMPORT.XML by itself.

Restore Application Data to the File System

● SAPINST automatically extracts the collected application data SAPCAR archives into the respective directory structures of the target system.
● Some SAPCAR archives may need to be copied manually to the respective directory structures if mentioned in the system copy guide or SAP Notes.
● Manually copy the application-specific data that was not collected on the source system because the application was not known to SAPINST.

Technical Post-Migration Activities


The following activities are examples of follow-up activities:
● Install the SAP license key (if required).
● Change the passwords.
● Adjust RFC configuration of the SAP Java connector.
● Generate new public-key certificates.
● Perform component-specific follow-up activities.


● Update statistics or perform other performance-relevant database activities.


● Run a full installation backup.

The homogeneous and heterogeneous system copy guides and their respective SAP Notes describe the general follow-up activities.
JAVA systems that include components that connect to an ABAP backend using the SAP JAVA Connector (SAP JCo), for example SAP BW or SAP Enterprise Portal, need their RFC destinations maintained. After the system copy, the public-key certificates are invalid on the target system and need to be reconfigured. Component-specific follow-up activities exist for SAP BW, Adobe Document Services, SAP Knowledge Warehouse, SAP ERP, and SAP CRM.

Post-Migration Test Activities

● Set up a test environment.


● Compare program results between the original and copied system.
● Test typical processes from day-to-day business.
● Perform runtime tests of critical functions.
● Run performance tests (under heavy load).
● Verify communication with external systems (carefully, as a test).
● Create a cut over plan for the final migration.

Note:
Involve end users in the test process.

Take care when setting up the test environment. To prevent unwanted data communication
to external systems, isolate the system. External systems do not distinguish between
migration tests and production access. To identify any differences between the original and
the migrated system, involve end users as soon as possible.

LESSON SUMMARY
You should now be able to:
● Perform a Java system migration



Unit 9

Learning Assessment

1. In an ABAP system migration, which program is generating the TSK file?


Choose the correct answer.

X A R3LDCTL

X B MIGMON

X C R3LOAD

X D SAPINST

2. In a Java System migration, what is the purpose of the sizes.xml file?


Choose the correct answer.

X A To define the JAVA target database layout for SAPINST

X B To perform table splitting for JSPLITTER

X C To define the JLOAD export package order in JMIGMON

X D To define the packages split sizes for JPKGCTL



Unit 9

Learning Assessment - Answers

1. In an ABAP system migration, which program is generating the TSK file?


Choose the correct answer.

X A R3LDCTL

X B MIGMON

X C R3LOAD

X D SAPINST

2. In a Java System migration, what is the purpose of the sizes.xml file?


Choose the correct answer.

X A To define the JAVA target database layout for SAPINST

X B To perform table splitting for JSPLITTER

X C To define the JLOAD export package order in JMIGMON

X D To define the packages split sizes for JPKGCTL



UNIT 10 Troubleshooting

Lesson 1
Troubleshooting Migration Problems

UNIT OBJECTIVES

● Troubleshoot common migration problems


● Identify and correct errors



Unit 10
Lesson 1
Troubleshooting Migration Problems

LESSON OVERVIEW
In this lesson, you will learn how to troubleshoot common migration problems and identify
and correct errors.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Troubleshoot common migration problems
● Identify and correct errors

R3LOAD Terminations
If you encounter unexpected terminations, check that the migration tools are compatible with the current SAP system and database version, that the passwords are correct, and that the necessary changes to the root user or the environment were made before starting SAPINST.

R3LOAD: Unload Terminations


The following list shows some of the most frequent causes of unload terminations:

● No connection to database
● Incorrect or missing environment variables
● Inconsistencies between the ABAP dictionary and the database dictionary
● Insufficient storage space in the export file system
● Insufficient temporary database space for sorting

Missing or incorrect variable settings in the C-shell environment might lead to DB connection problems with R3LDCTL, R3SZCHK, or R3LOAD. This can be the case on SAP systems where only a non C-shell environment is used. When the system experiences such failures, test the database connection by logging on as <SID>adm with the C-shell as login shell and executing R3LOAD -testconnect. Adjust the environment settings as required.
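A minimal check sequence on a Unix host could look like this; the SID MJ1 is a placeholder, and the su options follow GNU/Linux syntax:

su -s /bin/csh - mj1adm       # log on as <sid>adm with the C-shell as login shell
R3LOAD -testconnect           # test the database connection
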
Check transaction DB02 or DBACOCKPIT for missing objects in the database or the ABAP data dictionary. Determine how to deal with the missing objects before starting the export.
Remove QCM tables that have no restart protocol in SE14 (invalid temp tables) before starting the export. These tables can lead to export and import errors. Import errors are mostly of type duplicate key.
Provide enough free disk space in the export file system. In general, assume the dump files will use 10% - 15% of the space of the source database. For compressed databases, double the free space. Export errors caused by insufficient free disk space can lead to dump file corruption.


With a sorted export, make sure that sufficient temp space is available in your database (for example, PSAPTEMP when exporting from Oracle). If there are free-space related errors, increase the temp space, reduce the number of parallel running jobs, or consider an unsorted export.

Note:
For more information, see SAP Note 9385: What to do with QCM tables.

R3LOAD: Load Terminations


The following list shows some of the most frequent causes of load terminations:

● No connection to the database


● Incorrect or missing environment variables
● Database size is insufficient
● Insufficient temporary database space for sorting
● Insufficient permissions on import files and directories

Database connection problems can be caused by the same issues as described for export
terminations. Since the target system is normally installed from scratch, connection
problems are unlikely.
If there is not enough temporary database disk space for index creation, increase the
database storage units that are used for sorting (for example, PSAPTEMP for Oracle) or
reduce the number of parallel running R3LOAD processes.
Ensure that R3LOAD can access the directories and files of the import file system.

Useful R3LOAD Environment Variables

● R3LOAD warning level


- R3LOAD_WL=1
- Enhanced logging of R3LOAD activities
- Includes list of environment variables of the started R3LOAD
● R3LOAD trace level
- R3LOAD_TL=1...4
- DBSL communication trace

The R3LOAD warning level can have any value, as long as something is set. The R3LOAD
trace level is forwarded to the DBSL interface only. Useful values are between 1 and 4. The
higher the value, the more output is written. The trace can provide valuable information for
troubleshooting.

Note:
Only use these environment variables for problem analysis. The amount of log
data can quickly fill your install file system.


Note:
The warning and trace level can be set for programs like R3LDCTL, R3SZCHK,
and R3TA as well. Set the environment variable as follows: <PROGRAM
name>_WL=1 and <PROGRAM name>_TL=1 ... 4. The warning and trace level
functionality is not available in all program versions, except for R3LOAD.

Figure 155: Useful R3LOAD Environment Variables: Example

The R3LOAD warning level is useful in cases where files cannot be found or opened without an
obvious reason. The list of environment variables contained in the R3LOAD log file can assist
you in the analysis of database connection problems caused by incorrect environment
settings. More information is written than usual.
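A sketch of how the variables could be set for a manual R3LOAD run, using Bourne shell syntax (C-shell users would use setenv instead of export):

export R3LOAD_WL=1      # enhanced logging, including the environment variable list
export R3LOAD_TL=2      # DBSL communication trace, useful values are 1 to 4
R3LOAD -testconnect     # or the regular R3LOAD invocation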

Figure 156: R3LOAD: Load Termination Example

In a load termination example, the initial situation is as follows:


1. Table ATAB has been created successfully.

2. DB2 SQL error 551 occurs during an INSERT to table ATAB.

Error text:
Database user SAPSR3 is not authorized to perform the INSERT.
R3LOAD response:
The R3LOAD process working on package SAPPOOL.STR cannot proceed due to the SQL error. R3LOAD ends with a negative return code.
To fix the problem, grant access authorization for table ATAB to user SAPSR3.

R3LOAD Restart Example

Figure 157: R3LOAD Restart Example

The figure shows the following scenario:

1. R3LOAD reads the first task of status err or xeq from the SAPPOOL.TSK file. The error
occurred during the load, and not while creating the table, so the table contents must be
removed first. To do this, R3LOAD executes the truncate/delete SQL statement, which is
defined in the DDL<DBS>.TPL file.

2. Restart complete. Data is loaded.

Duplicate Key at Import Time

● If the database returns a duplicate key error while creating a primary key or unique index,
R3LOAD stops on error.

Possible Reasons for Duplicate Key Error


Duplicate key errors can be caused by the following reasons:

● R3LOAD restart problems during export or import


● No unique key on the source database table
● BW object involved


● Corrupted primary key on the source database


● External programs inserted data using trailing blanks to fill field

Some tables have a primary key in the ABAP DDIC, but not on the source database. This may be intentional, so ask the customer whether there is a reason, and check for SAP Notes.
If a BW object is involved, possibly no *.SQL file was found by R3LOAD, or SMIGR_CREATE_DDL was not called on the source system.
In the source database, silently corrupted primary keys can lead to duplicate exported data records. In this case, the number of records in the *.TOC file is larger than the number of rows in the source table. Verify or repair the primary key and export the table again.
SAP ABAP systems do not write data with trailing blanks into table fields (just to fill them up). External programs that write directly into SAP system tables may insert trailing blanks to fill a field completely. R3LOAD always exports table data without trailing blanks. If the same data exists in the source database both with and without trailing blanks (because the SAP system modified the data), duplicate key errors occur at import time. In this situation, clean up the source table and export again.

Power Failure or OS Crash at Export Time

● After a power failure, an operating system crash, or an abnormal termination of R3LOAD processes, the *.TOC, data dump, and *.TSK files that were active at the time of the termination may become inconsistent, because the operating system or R3LOAD was not able to write or flush the correct information to disk.

Symptoms at Load Time

● Duplicate key, decompression, or check sum error messages


● More rows loaded into table than recorded in the *.TOC file
● Data type violations because of unstructured data

In an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the open files, or what happened to the entire file. The example in the list describes a rare but possible situation.
Exports into a network-mounted file system can lead to similar symptoms if the network connection breaks. Abnormal terminations are caused by external events, not by a database error or a file permission problem.


Figure 158: Power Failure or OS Crash at Export Time: Small Tables

The figure Power Failure or OS Crash at Export Time: Small Tables shows a power failure or operating system crash. In this example, the OS is unable to flush all of the file buffers to disk, so a mismatch occurs between the dump file and the *.TOC or *.TSK file content. R3LOAD exports TABLE08, and the *.TOC or *.TSK file is updated, but the data is not yet written by the OS into the dump file. The *.TOC file contains block 48 as the last data block in the dump file.
After restarting the export process, R3LOAD looks at the *.TSK file to get the last exported table, which is TABLE08. It then reads the *.TOC file for the last write position. R3LOAD opens the dump file and seeks to the next write position at block 49 (which is behind the end of the file in this case). The next table to export is TABLE09, which will be stored in the dump file starting at block 49. The gap between block 42, the last physical write, and block 49 contains random data.
It is unclear whether the *.TSK file contains more or fewer entries than the corresponding dump file, so using the merge option might be risky. This problem is rare, but it can happen to small tables that were exported completely while the last few blocks still had to be flushed to the dump file; the flush never completed because the system crashed, although the entries in the *.TOC or *.TSK files had already been written.


Figure 159: Power Failure or OS Crash at Export Time: Large Tables

The figure Power Failure or OS Crash at Export Time: Large Tables shows a power failure or operating system crash. In this example, the OS is unable to flush all of the file buffers to disk, so a mismatch occurs between the dump file and the *.TOC or *.TSK file content. R3LOAD finished the export of the large TABLE02, and the *.TOC or *.TSK file is updated, but the last 8 data blocks are not yet written by the OS into the dump file. The *.TOC file contains block 320,000 as the last valid data block in the dump file.
After restarting the export process, R3LOAD looks into the *.TSK file to find the last exported table, which is TABLE02, then reads the *.TOC file for the last write position. R3LOAD opens the dump file and seeks to the next write position, block 320,001 (which is behind the end of the file in this case). The next table to export is TABLE03, which will be stored in the dump file starting at block 320,001. The gap between block 319,992 (last physical write) and block 320,001 contains random data.
It is unclear whether the *.TSK file contains more or fewer entries than the corresponding dump file, so using the merge option might be risky. This problem is rare, but it can happen to tables that were exported completely where only the last blocks still had to be flushed to the dump file, while the entries in the *.TOC or *.TSK files were written before.

Solution
Use the following solution when there is a power failure or OS crash when exporting large
tables:

● Repeat the export of all packages that were unloading when the system crashed.
● Remove the corresponding *.LOG, *.CMD, *.TSK, *.BCK, *.TOC, and dump files.
● Restart SAPINST and MIGMON.


Note:
In this situation, using the merge option is risky because it is unclear whether
the *.TSK file contains more or fewer entries than the corresponding dump file.

Only remove files for packages that were not completed at the time of the system crash. We do not recommend the use of the merge option. Repeat the export of all involved packages; see the sketch after this paragraph.
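As an illustration, cleaning up one affected package before repeating its export could look like this. The package name SAPCLUST and the directory names are placeholders:

rm /install_dir/SAPCLUST.LOG /install_dir/SAPCLUST.CMD
rm /install_dir/SAPCLUST.TSK /install_dir/SAPCLUST.BCK
rm /export_dir/DATA/SAPCLUST.TOC /export_dir/DATA/SAPCLUST.0*   # *.TOC and dump files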

R3LOAD – Export Error Due to Space Shortage

● The problem:
- A space shortage in the file system causes the export to stop but after increasing the
space on the file system, the export restarts without problems.
- The import stops on RFF, RFB, checksum, or cannot allocate buffer errors.
● The reason:
- At the first export termination, the *.TOC and *.TSK files were updated before the dump
file was written.
- The *.TOC file provided a dump file seek position for the restarted export process
behind the end of file and continued writing from there.
● The solution:
- Use the same procedure as for OS crashes or power failures during export

Figure 160: R3LOAD Export Error Due to Space Shortage: Example

The figure R3LOAD Export Error Due to Space Shortage: Example shows unload terminations
due to a shortage of space in the file system. After increasing the space of the file system, the
export restarts and finishes without further problems. At import time, R3LOAD stops with a
checksum error.


Export Rules

● Ensure that there is sufficient disk space available for the data dump.
● When using NFS, check SAP Note 2093132 for safe mount options.
● Run MIGMON in the background if running standalone.
● If you are unsure whether to restart or repeat the export, then it is best to repeat the entire
export process.

The R3LOAD export process is sensitive, so be sure to prevent any disturbance.
If somebody else performed the R3LOAD export and you are only responsible for the import, make sure that you receive the export logs along with the export dump files. For import errors caused by dump corruptions, examine the export logs for troubleshooting.

Power Failure or OS Crash at Import Time

● The problem:
After a power failure or operating system crash, the *.TSK files that were active at the time
of the termination may be inconsistent because the operating system was unable to flush
the file buffers in time.
● The symptom:
R3LOAD attempts to restart the import, but encounters another error as soon as it tries to
process the next table.

Figure 161: Power Failure or OS Crash at Import

In the case of an OS crash or a power failure, it is difficult to ascertain which OS buffers were
flushed to the open files or what happened to the entire file.
In the figure Power Failure or OS Crash at Import, a power failure or operating system crash
occurred. The operating system could not flush all of the file buffers to disk, so a mismatch
occurred between database content and the *.TSK file. R3LOAD imported TABLE08, but the
*.TSK file only contains information regarding TABLE05 and its primary key.


Solution
Use the following solution when there is a power failure or OS crash at import:

● Add the option -merge_bck to the R3LOAD command line parameters in the import_monitor_cmd.properties file.
● Restart SAPINST and MIGMON.

It is necessary to merge *.TSK and *.BCK. All entries that are not marked as executed (xeq)
are set to error (err). The described problem only applies to small tables.
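A sketch of the corresponding line in import_monitor_cmd.properties; the loadArgs property name is an assumption based on common MIGMON configurations:

loadArgs=-merge_bck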

Duplicate Key Problem after Restarting Import

● The problem:
- In rare cases, an attempt to delete the data that has already been loaded fails during a
restart.
● The symptoms:
- The creation of the primary key fails with database message Duplicate key .
- The number of table rows is larger than recorded in the corresponding
SAP<TABART>.TOC file.

If the data deletion fails for a large table when restarting an import, R3LOAD assumes that the
table is empty and starts the data load. The creation of the primary key stops with a duplicate
key error.

Solution
Use the following solution when there is a duplicate key problem after restarting import:

● Change the contents of the task file by hand and edit the affected <PACKAGE>.TSK file.
Change:
D <Table Name> I ok
P <Key Name> C err

To:
D <Table Name> I err
P <Key Name> C err
● If you do not perform this modification, the restart always starts at the Create Primary Key
command.

Corrupted R3LOAD Dump Files

● The problem:
- R3LOAD stops during import. Various error messages may appear.
● The symptoms (error messages):
- (RFF) ERROR: SAPxxx.TOC is not from same export as SAPxxx.001
Data block corrupted or files of different exports are mixed

- (RFF) ERROR: buffer (...KB) too small
Data block corrupted, random buffer size read from dump file
- (RFB) ERROR: CsDecompr rc= –11
Data block corrupted, block decompression error
- (RFB) ERROR: wrong checksum – invalid data
Data block corrupted

RFF = Read from file.
RFB = Read from buffer.
(RFF) ERROR: SAPxxx.TOC is not from same export as SAPxxx.001.
The dump file is corrupted, or files of different exports have been accidentally mixed.
(RFF) ERROR: buffer (... KB) too small (the figure is larger than 10,000).
R3LOAD read an invalid buffer size from the dump file. The buffer is used to load data
blocks for decompression. Typical buffer sizes are only a few MB.
(RFB) ERROR: CsDecompr rc= –11.
The buffer data cannot be decompressed.
(RFB) ERROR: wrong checksum – invalid data.
The checksum of the loaded data blocks is wrong.

Note:
For more information, see SAP Notes 143272: R3LOAD (RFF) ERROR: buffer
(xxxxx KB) too small; and 438932: R3LOAD Error during restart (export).

Corrupted R3LOAD Dump Files

● The reasons:
- The dump file was corrupted either during the export, or during the file transfer, or
there were read errors when accessing the dump file via NFS.
● The solutions:
- Compare original dump file against the transferred one
- If the files are different, copy the original file once more (in a safer way)
- If the files are identical, export the package once more and copy again

If the source and the target files are identical, then the contents themselves are corrupt.
To analyze the reasons for corrupted dump files, check the export log files. There may be an
error during the export or a restart situation. Use different checksum tools to compare
original and copied files, or use different algorithms, if possible.
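
As a sketch of such a comparison, the following Python fragment computes checksums of
an original and a transferred dump file with two different algorithms. The file paths
are placeholders:

# Sketch: compare an original and a transferred dump file with two
# different checksum algorithms to make the comparison more reliable.
import hashlib

def file_digest(path, algorithm):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

for algo in ("md5", "sha256"):
    original = file_digest("/export_dump/DATA/SAPAPPL1.001", algo)
    copied = file_digest("/import_dump/DATA/SAPAPPL1.001", algo)
    print(algo, "identical" if original == copied else "DIFFERENT")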

SICK – System Installation Check Failed


Transaction SICK detects errors that indicate an incorrect SAP Basis installation. Transaction
SICK reports one or more of the following errors:

● Incorrect SAP Basis installation


● Required file systems or directories do not exist
● Missing or wrong kernel files
● Wrong file permissions

ABAP DDIC/Database Consistency Check


As soon as the target system is available for login, run the ABAP DDIC/Database Consistency
Check from transaction DB02 or DBACOCKPIT.

Tables Missing in the Database

● Do the tables exist in the source system?


● Are the tables contained in the R3LOAD control files?
● Was there an error during the split of *.STR files?
● Was there an error during the file transfer?
● Was there an error during the import?
● Do error messages appear in the *.LOG files?
● Were the tables added to a negative list?
● Are the tables database-specific and do they require generation by any tool?

R3LOAD – Load Termination Due to Database Space Shortage

● All the values for the expected database size calculated by R3SZCHK are estimates.
● It is often necessary to enlarge the computed target database size in the
DBSIZE.XML file.

Be generous with database space during the first test migration (allow automatic growth of up
to 20%). Adjustments for the production migration can be determined from the results of the
test migrations.

Data/Index in Wrong Database Storage Unit

● Problem
- After an R3LOAD system copy, you realize that tables or indexes have been loaded into
the wrong database storage unit.
● Check
- Is the TABART database storage unit assignment correct in the source system (tables
TA<DBS>, IA<DBS>, TS<DBS>)?
- Have the relevant tables and indexes been assigned to the correct TABART in the
source system (table DD09L)?
- Does the DDL <DBS>.TPL file contain the correct TABART/tablespace mapping?
- Do the *.SQL files contain the right TABART names for the object?

In many cases, tables and indexes are moved to database storage units created by
customers without maintaining the appropriate ABAP Dictionary tables. The result is that the
files DDL<DBS>.TPL and *.STR contain the original SAP TABART settings. Check the *.CMD
files to find out which DDL<DBS>.TPL file has been used. Oracle databases running on the old
tablespace layout are automatically installed with the reduced tablespace set on the target
system.

JLOAD Export and Import Restart Behaviors


If a table fails to export, it does not make sense to continue with another table, so JLOAD
stops immediately if an error occurs during export.
If the import of a certain object fails, it is often possible to continue with another one, so
JLOAD does not stop if errors occur during import.

JLOAD Export Failure and Restart Behavior


Note the following about JLOAD export failure and restart:

● Export Failure
- Terminate if data of a certain object cannot be exported.
- A stop-on-error strategy is used.
● Export Restart
- The existence of an EXPORT[_<PACKAGE>].STA file flags a restart situation.
- Items are read from EXPORT[_<PACKAGE>].STA.
- Exported objects with the status OK are skipped. The restart begins with the first
item that has an error status.
- Export continues by writing the next header for metadata or data.
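
As a minimal sketch of this skip logic, the following Python fragment assumes a
simplified status file with one <item>=<status> pair per line; the real
EXPORT[_<PACKAGE>].STA format may differ:

# Minimal sketch of the restart logic: items with the status "OK" are
# skipped, and the export restarts at the first item with an error
# status. Assumes a simplified "<item>=<status>" line format.
def find_restart_item(sta_path):
    with open(sta_path) as f:
        for line in f:
            item, _, status = line.strip().partition("=")
            if status == "OK":
                continue              # already exported successfully
            return item               # restart with this item
    return None                       # all items finished

print(find_restart_item("EXPORT_0.STA"))  # placeholder file name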

JLOAD Import Failure and Restart Behavior


Note the following about JLOAD import failure and restart:

● Import Failure
- If a database object cannot be created or data cannot be loaded, JLOAD tries to
proceed with the next object.
- A continue-on-error strategy is used.

Note:
Even if a table fails to import, it makes sense to save time by proceeding
with other tables. A later restart only deals with the erroneous objects.

● Import Restart
- The existence of an IMPORT[_<PACKAGE>].STA file flags a restart situation.
- Items are read from IMPORT[_<PACKAGE>].STA.

- Imported objects with the status OK are skipped. The first item with an error status is
used to restart.
- In the database, the erroneous item is cleaned up by deleting loaded data or dropping
the database object.

If the last unsuccessful action was to load data, the restart action is to delete data. If the last
unsuccessful action was to create an object (table or index), the restart action is to drop a
table or index.

LESSON SUMMARY
You should now be able to:
● Troubleshoot common migration problems
● Identify and correct errors



Unit 10

Learning Assessment

1. If R3LDCTL cannot connect to the database when started by SAPINST in UNIX, you should
check which of the following issues?
Choose the correct answers.

X A Check that passwords are correct.

X B Check that the <SID>adm database environment settings for the C-Shell are
correct.

X C Check whether there is enough space in the export directory

2. If an R3LOAD process fails to create the secondary index of a table, what is the restart
activity when running R3LOAD once more?
Choose the correct answer.

X A Drop table, create secondary index

X B Drop primary key, create secondary index

X C Drop secondary index, reload data

X D Drop secondary index, recreate secondary index



Unit 10

Learning Assessment - Answers

1. If R3LDCTL cannot connect to the database when started by SAPINST in UNIX, you should
check which of the following issues?
Choose the correct answers.

X A Check that passwords are correct.

X B Check that the <SID>adm database environment settings for the C-Shell are
correct.

X C Check whether there is enough space in the export directory

2. If an R3LOAD process fails to create the secondary index of a table, what is the restart
activity when running R3LOAD once more?
Choose the correct answer.

X A Drop table, create secondary index

X B Drop primary key, create secondary index

X C Drop secondary index, reload data

X D Drop secondary index, recreate secondary index



UNIT 11 Special System Copy
Procedures

Lesson 1
Performing Unicode Conversions 259

Lesson 2
Performing a Near Zero Down Time System Copy 262

Lesson 3
Describing SAP HANA Storage 271

Lesson 4
Performing a HANA Migration 274

Lesson 5
Performing a Combined Upgrade and Database Migration to SAP HANA 292

UNIT OBJECTIVES

● Perform Unicode conversions


● Describe the Near Zero Down Time system copy methodology
● Describe SAP HANA storage
● Perform an SAP HANA migration
● Perform combined upgrade and database migration to SAP HANA



Unit 11
Lesson 1
Performing Unicode Conversions

LESSON OVERVIEW
In this lesson, you will learn how to perform Unicode conversions.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform Unicode conversions

Unicode Conversion Notes


Unicode Conversion Information Sources
The following sources have additional information about Unicode conversions:

● SAP Service Marketplace (SMP)
- http://service.sap.com/unicode
● SAP Community Network (SCN)
- http://scn.sap.com/community/internationalization-and-unicode
- http://www.sdn.sap.com/irj/sdn/unicode-conversion
● SAP Notes
- 2033243 — End of non-Unicode Support: Release Details
- 1322715 — Unicode FAQs
- 1319517 — Unicode Collection Note

Unicode conversions are available starting with NetWeaver 6.20. The best way to begin
learning about the Unicode conversion topic is to read SAP Note 1322715, Unicode FAQs.
This note provides current links to Unicode articles and presentations.
The Unicode Collection Note provides information about potential problems and their
solutions. The SAP Notes mentioned in the collection note are release-dependent and do not
apply to all SAP system versions.
SAP has announced the end of non-Unicode support. SAP Note 2033243 describes which SAP
system version is the last non-Unicode system and which is the last supported SAP kernel.

R3LOAD Unicode Conversion


R3LOAD Unicode Conversion
Note the following about R3LOAD conversion:

● The Unicode conversion uses SAPINST and R3load to export the source and to import the
target system.
● There are special requirements for preparations in the source system.
● There is a minimum support package level requirement.
● The code page is always converted at export time.
● The CPU, memory, and disk storage consumption increase after the conversion.
● A Unicode conversion can be combined with a heterogeneous system copy.

Note:
Check the current status and release strategy of the Unicode conversion in the
SAP Service Marketplace. See the quick links UNICODE, UNICODE@SAP, and
the appropriate SAP Notes.

● An upgrade and Unicode conversion can be executed at the same time (target systems
below SAP NetWeaver 7.50).

Unicode SAP Systems require SAP NetWeaver Application Server 6.20 and above.
The Unicode Conversion is only applicable if a minimum Support Package Level is installed.
Check the Unicode Conversion SAP Notes for more information.
R3LOAD converts the data to Unicode while the export is running. Since it is not sufficient for
R3LOAD to read the raw data, additional features are implemented regarding the data
context, which is available in the source system only. Specific Unicode preparation steps
delete obsolete data, fill various control tables, and generate the Unicode Nametab before the
export can take place.

Note:
A Unicode conversion at import time is not supported for customer systems.

A reverse conversion from Unicode to non-Unicode is not possible.


The Unicode target SAP system needs more CPU, memory, and disk space than the non-
Unicode source system. The increase in disk space depends on the target database character
set (UTF-8 or UTF-16). See the SAP Unicode articles in the SCN for further details.
R3LOAD dump files are platform independent, so the Unicode conversion can be combined
with a heterogeneous system copy. As usual, the OS/DB Migration Check applies to Unicode
conversions that change the target operating system or database system.
In the case where an SAP system upgrade and a Unicode conversion are required, both
activities can be performed in a single downtime running a Combined Upgrade & Unicode
Conversion (CU&UC). The CU&UC method is not available for target systems based on SAP
NetWeaver 7.50 or higher.

Unicode Conversion Challenges


Unicode Conversion Challenges
The following issues can provide challenges for a Unicode conversion:

● Very large databases


● Strict downtime requirements
● Slow hardware
● ABAP Unicode enabling
● MDMP Unicode conversions
● Communication between Unicode and MDMP SAP systems
● Non-SAP interfaces
● Availability of third-party products or add-ons

Very large databases, short downtimes, and slow hardware may require an incremental
system copy approach.
Review ABAP coding using transaction UCCHECK. Byte offset programming or dynamic
programming (data types determined dynamically during program execution) can require
significant effort.
The Unicode conversion of MDMP (Multi Display Multi Processing) systems requires more
effort than that of SCP (single code page) systems, and the effort increases with the number
of installed code pages.
In MDMP systems, not all tables provide a language identifier for their data. For this data, a
vocabulary must be created to allow an automated Unicode conversion. The creation and
maintenance of this vocabulary is a time-consuming task. The involvement of experienced
consultants shortens this process significantly.
During the Unicode export of MDMP systems, R3LOAD writes *.XML files that are used for
final data adjustments in the target system (transaction SUMG). For SCP (Single Code Page)
systems, the *.XML files are written for informational purposes only during the Unicode
conversion export.
MDMP / Unicode interfaces require significant effort. We recommend minimizing the number
of language-specific interfaces, if possible.
Interfaces to non-SAP systems need an in-depth analysis to identify their code page
requirements.

LESSON SUMMARY
You should now be able to:
● Perform Unicode conversions



Unit 11
Lesson 2
Performing a Near Zero Down Time System
Copy

LESSON OVERVIEW
In this lesson, you will learn how to run a Near Zero Down Time system copy.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the Near Zero Down Time system copy methodology

Near Zero Downtime Method (NZDT/MDS)


Near Zero Downtime Method (NZDT) — Minimized Downtime Service (MDS)
The following conditions apply to the NZDT/MDS method:

● For large databases that cannot be migrated using standard MIGMON R3LOAD system
copy methods.
● Applicable since version 4.6C.

Note:
The availability is database dependent; check with SAP for more information.

● If you are using BI or SCM systems, check with SAP.


● Only SAP consultants can carry out projects.

Note:
For the current MDS project handling, see SAP Note 693168.

● The usual SAP OS/DB Migration Check service applies.

The NZDT method of migrating SAP systems is an SAP MDS service that uses an incremental
migration approach. This service was developed to migrate very large databases. Compared
to the standard system copy procedure, it can reduce the technical system copy downtime
significantly, to a few hours or less.
In BI and SCM systems, the creation and deletion of tables (with or without a primary key) and
indexes, as well as structural DDIC changes are quite common. Contact SAP to determine if a
customer-specific NZDT project is possible.

The NZDT method should only be executed by specially trained SAP consultants. It is suitable
for heterogeneous system copies and Unicode conversions (or a combination of both).
Other technical maintenance events like upgrades or the update of enhancement packages
can be performed with this type of Near Zero Downtime procedure, but these topics are not
covered in this course.

NZDT and MDS Features


NZDT and MDS features include the following:

● NZDT workbench is the central migration controller


● Proxy Receiver applies recorded data to the target schema
● Logging of table changes in the source system is trigger-based
● Synchronization of table data between the source and target systems
● Safe synchronization protocol ensures data consistency
● Flexible migration scenarios which are adaptable to customer system requirements

The NZDT Workbench is a standalone ABAP NetWeaver application used to configure and
control the migration process between the source and the target system. During the table
synchronization, the data stream runs through the NZDT Workbench. In the case of a Unicode
conversion, the translation to Unicode is performed there.
The Proxy system is an MCOD NetWeaver system installation in the same database as the
SAP target system. It works in its own DB schema. It receives the data from the Workbench
and stores it in the tables of the target DB schema.
To record table changes, inserts, updates, and deletes, triggers are created for nearly all
tables in the source system. A trigger fires as soon as the content of a table is changed and
inserts the primary key of the changed record into the related log table.
A reliable, safe synchronization protocol ensures data consistency between the source and
target systems. It is a robust implementation that restarts after a transfer aborts, for example,
due to network problems or an unexpected system shutdown of the target system.
The NZDT procedure (based on the DMIS Add-on) is a type of toolbox, so migration scenarios
can be adapted to specific needs.

NZDT/MDS with Transport Restrictions


NZDT/MDS Migration Scenario
The following is an example scenario in which transport restrictions apply:

● Implements insert, update, and delete triggers for almost all tables
● Only a few tables are excluded from triggers
● R3LOAD-based clone export
● Online delta replay and final delta replay
● R3LOAD-based export and import of tables without triggers
● Transport restrictions apply to ABAP DDIC objects
● Replication of ABAP DDIC changes (only available for special projects)

The table insert, update, and delete triggers will be implemented on almost all tables. The
online delta replay (ODR) transfers the recorded data changes of the triggered tables to the
target system. During the downtime, a final delta replay (FDR) takes place, transferring the
records that were not already synchronized or were changed during the ramp down.
In order to prevent performance issues, tables with an exceptionally high update rate can be
omitted from the triggers.
Tables without triggers need to be exported and imported using R3LOAD during the
downtime. The selection of these tables must be made carefully because they affect the
minimal achievable technical downtime. Tables containing a large amount of data should be
excluded from this selection when possible.
Tables without primary keys must be imported and exported during the downtime because
they cannot provide a log table entry that can be used to identify a unique source table record
when synchronizing the data between the source and target system.
R3LOAD is used to export the clone system, which is created after the triggers are established
on the source system. The exported data is imported to the target system to start the data
synchronization.
The ODR transfers the logged table changes from the source to the target system, while the
source system is continuously productive.
The FDR takes care of the remaining data that could not be transferred by the ODR. It is
performed during the system downtime after the source system was ramped down and there
is no activity on the system.
The technical downtime duration depends on the following conditions:
● The amount of data which was not transferred by the ODR
● The amount of data that must be exported by R3LOAD
● The length of time it takes to perform the consistency check between the source and the
target system (row counting)

The achievable technical downtime is small compared to a conventional database export and
import.
The table structures of tables having triggers cannot change during the NZDT process.
Transports intended to modify the structure of triggered tables must be postponed until no
NZDT triggers are active, for example, between test cycles or after the migration is finished.
An NZDT project with ABAP DDIC replication (including the replication of the creation,
modification, and deletion of tables and structures) needs to be discussed with SAP.
Secondary indexes cannot be replicated and must be generated as a post-migration task. It
must be determined whether such a project can be supported on the respective customer
system. This creates additional effort and requires the involvement of DMIS development.

Figure 162: Prepare NZDT Workbench and Source System

First, the NZDT workbench needs to be installed. It is a separate NetWeaver system of the
same release and basis support package as the customer source system with an applied
DMIS add-on.
Next, the same DMIS add-on used on the NZDT workbench must be installed on the source
system. The add-on installation does not modify existing repository or data dictionary
objects, so it can be applied without downtime.
Using the NZDT Workbench, triggers and logging tables are created to record the inserts,
updates, and deletes in the source system database. There must be enough free space for the
logging tables in the source database (usually several hundred GB).
As soon as the triggers are active, the amount of DB logging increases to at least double the
size compared to operation without triggers. A further increase is to be expected when
starting the data synchronization. The DB log backup must be able to handle the larger
amount of logs; otherwise, it will cause a DB standstill (archiver stuck).

Figure 163: Create Clone and Target System

After the triggers are established in the source system, the database is cloned (copied using
backup/restore or by advanced storage copy techniques). All tables of the clone are exported
using R3LOAD and imported into the target system DB schema.
The target database is used to run two independent SAP systems, each of them on its own DB
schema. The proxy system can be a copy of the NZDT workbench system. This simplifies the
installation as the proxy system requires the same setup. It is an MCOD installation in the
same database as the migration target system.
The migration target system is created from the export of the clone system, but it is not
started until the final data synchronization is complete.
After the import into the target DB schema is finished, table aliases are created to give the
proxy system direct access to the tables in the target schema. To distinguish the local proxy
tables from the target system tables, the aliases are created using the prefix /1LT/*.
While the export/import is running, preprocessing takes place on the source system. It
examines the NZDT log table entries by searching for records which were modified several
times. When the same record is updated more than once or is deleted afterwards, only the
final updated record or the final deletion needs to be transferred. All intermediate states can
be marked as processed. Preprocessing causes additional database transaction log entries
for every marked record; generally this mechanism reduces the amount of transfer data by
30% to 40%. The same mechanism is used during the online delta replay implicitly.
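
The effect of this preprocessing can be sketched in a few lines of Python. The log record
structure is heavily simplified and purely illustrative:

# Illustrative sketch of the preprocessing step: when the same record
# is changed several times, only the last change per primary key needs
# to be transferred; all intermediate states are marked as processed.
def collapse_log(entries):
    # entries: (primary_key, change_type) pairs in chronological order,
    # where change_type is "I" (insert), "U" (update), or "D" (delete)
    final_changes = {}
    for key, change in entries:
        final_changes[key] = change   # the last change per key wins
    return final_changes

log = [("K1", "I"), ("K1", "U"), ("K1", "U"), ("K2", "I"), ("K2", "D")]
print(collapse_log(log))   # {'K1': 'U', 'K2': 'D'}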

Figure 164: Online Delta Replay: Synchronize Table Data

The online delta replay table synchronization can be prioritized and parallelized individually
for each table. The number of parallel running jobs can be adapted depending on the current
load of the system. Usually, the jobs are distributed over multiple application servers.
Tables with a high update frequency and a large number of log entries are set to a high
transfer priority and transferred in parallel, utilizing multiple batch jobs at the same time.
The logging tables contain the primary keys of the changed records, additional information
about the type of change (insert, update, delete), a time stamp, and the process status. Every
time a row is inserted, updated, or deleted, a database trigger is fired to update the logging
table.
The synchronization jobs scan the logging tables (in the example, TAB01' and TAB02') for
unprocessed records. These records are transmitted using the NZDT workbench to the proxy
system. The proxy system updates the data in the target system schema by utilizing the
previously created aliases.
A safe protocol makes sure that only those records that have been successfully updated in
the target database are marked as completed (processed).
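
The following Python toy example illustrates this protocol with in-memory dictionaries
standing in for the source table, the target schema, and the logging table. It is purely
illustrative and not the actual DMIS implementation:

# Toy illustration of the safe synchronization protocol: a logging
# entry is marked as processed only after the corresponding change has
# been applied successfully to the target schema.
source = {"K1": "v1-new", "K2": "v2"}                  # current source rows
target = {"K1": "v1-old"}                              # target schema rows
log = [{"key": "K1", "type": "U", "processed": False},
       {"key": "K2", "type": "I", "processed": False}]

for entry in log:
    if entry["processed"]:
        continue                                       # already transferred
    if entry["type"] in ("I", "U"):
        target[entry["key"]] = source[entry["key"]]    # apply insert/update
    else:                                              # "D": delete
        target.pop(entry["key"], None)
    entry["processed"] = True                          # only after success

print(target)   # {'K1': 'v1-new', 'K2': 'v2'}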
In case of a Unicode conversion, the translation to Unicode is performed in the NZDT
workbench.

Figure 165: System Ramp Down

After the online replay is 99.9% finished, the source system must be ramped down for the
final offline delta replay. This means that there cannot be any system activity: users are
locked out, no jobs are running, interfaces are stopped, and so forth, to avoid further data
changes.
Last-minute jobs or prework must be avoided because there might not be enough time to
transfer all the changes before ramp down. The remaining NZDT logging entries need to be
handled by the final delta replay, increasing the technical downtime.

Figure 166: Final Delta Replay — Synchronization Table Data

After the source system is ramped down, the offline (final) delta replay takes place. This
transfers the data for those tables where the online replay was not 100% completed or data
was changed during ramp down.

Figure 167: Export and Import Remaining Tables

The remaining tables that do not have triggers for technical or performance reasons are
exported and imported by R3LOAD. This should take only a few minutes.
After comparing the row count of the tables that were transferred by the online and the final
delta replay, the technical migration is finished. The target system can be started, and the
source system and the proxy system can be stopped now.

Figure 168: Post Migration Activities

In addition to the shutdown of the source, workbench, and proxy systems, the post-migration
activities include the removal of the proxy instance DB schema and of all the generated
programs that were created to transfer the data. The dictionary definitions of the NZDT log
tables are also removed from the target system.
Usually the migration scenario is tested at least two times.
There is no downtime available to perform a final delta replay from the productive source
system during a test run, so the source database is copied a second time after completing the
online delta replay. The resulting clone system is isolated to avoid system activity and is
started to perform the final delta replay simulation as it would run from the production
system, including the R3LOAD export and import of the remaining tables.
The NZDT workbench can be used multiple times, but the proxy and target systems need to
be reinstalled for each run. It makes sense to restore them from a backup taken after the
initial system setup.

LESSON SUMMARY
You should now be able to:
● Describe the Near Zero Down Time system copy methodology



Unit 11
Lesson 3
Describing SAP HANA Storage

LESSON OVERVIEW
In this lesson, you will learn how to describe SAP HANA storage.

LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe SAP HANA storage

SAP HANA Appliance


SAP HANA Naming
The following are facts about the meaning of SAP HANA:

● Originally SAP HANA meant SAP High-Performance Analytic Appliance.


- Appliance means a specific combination of hardware and software.
● Today SAP HANA is not only an in-memory database, but also includes an application
platform that runs a variety of products and applications.

SAP HANA is an in-memory database and application platform which, for many operations, is
10 to 1000 times faster than a regular database. This allows for a simplification of design and
operations, and enables real-time business applications.
SAP HANA has numerous engines providing various algorithms for in-memory computing and
application libraries to develop applications that run directly on SAP HANA. The libraries are
linked dynamically to the SAP HANA database kernel.
An SAP HANA appliance is a specific combination of hardware and SAP database software.
The SAP HANA appliance software can only be installed by SAP certified hardware partners
on validated hardware running a specific operating system. The appliance comes
preconfigured (hardware and software installed). It provides a fast implementation because
the system is ready to run out of the box.
With the introduction of SAP HANA Tailored Datacenter Integration (TDI), customers are
allowed to install their own SAP HANA systems on certified hardware. This is more flexible
than the SAP HANA appliance approach. To ensure quality and consistency of installations in
the customer datacenter, SAP has set up a special certification program for installing SAP
HANA systems.

Note:
For more information see the following SAP Notes:
● 1599888 SAP HANA: Operational Concept
● 1514967 SAP HANA: Central Note

SAP HANA Storage Methods


SAP HANA — Migration Relevant Storage Methods
The following migration relevant storage methods are available:

● Column store
● Row store
● Compression
● Insert only on delta merge

As opposed to classic databases, SAP HANA holds most table data in memory. Not only does
this require a large amount of memory, it also needs an efficient way to store and retrieve the
data. For that purpose, most of the data is stored in columns (column store tables) and not in
rows (row store tables) as in conventional database architectures. This concept allows for a
very high compression ratio because the number of repeated bytes in the same column
across different rows of a table is much higher than inside the row itself. In combination with
advanced data selection strategies, this allows very fast in-memory data access.
Write operations on compressed data would be costly since they require reorganizing the
storage structure. Therefore, write operations in a column store do not directly modify
compressed data. All changes go into a separate area called the delta storage. The delta
storage exists only in main memory; only delta log entries are written to the persistence layer
when delta entries are inserted. The delta merge operation (updating the compressed column
store) is decoupled from the execution of the transaction that performs the changes. It
happens asynchronously at a later point in time.
In-depth delta merge information can be found at http://scn.sap.com/docs/DOC-27558.
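
The following Python toy class illustrates the delta merge concept only; it is not SAP
HANA code, and the real implementation with compression and asynchronous merging is far
more sophisticated:

# Toy illustration of the delta merge concept (not SAP HANA code):
# writes go into an uncompressed delta buffer, and a separate merge
# step later folds the delta into the read-optimized main storage.
class ToyColumn:
    def __init__(self):
        self.main = []    # stands in for the compressed column store
        self.delta = []   # uncompressed in-memory write buffer

    def insert(self, value):
        self.delta.append(value)   # writes never modify the main store

    def merge(self):
        # decoupled from the writing transaction; runs at a later time
        self.main.extend(self.delta)
        self.delta.clear()

    def scan(self):
        # reads must combine main storage and unmerged delta entries
        return self.main + self.delta

column = ToyColumn()
column.insert("A")
column.insert("B")
print(column.scan())   # ['A', 'B'] even before the merge
column.merge()
print(column.main)     # ['A', 'B']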
Some tables still need to be stored in rows because of their small size or usage (for example, a
fast change frequency). R3LOAD creates tables for the column or row store while importing
into an SAP HANA database. Prior to SAP NetWeaver 7.40, SAP provides a list of row store
tables to be used during the migration import, generated by SMIGR_CREATE_DDL. Starting
with SAP NetWeaver 7.38 and 7.40, the row store/column store information is managed in the
ABAP DDIC, and R3LDCTL extends the table type with a row/column store flag in the *.STR
files.
Row store tables are held completely in memory. Column store table content is loaded on
demand or can be preloaded (for example, by SQL scripts).

SAP HANA — Migration Relevant Resources


The following migration resources are available:

● Table partitioning
● Multicore architecture
