Syniti ADM Training

The document outlines the data migration process, detailing steps such as preparing relevancy reports, joining tables, mapping fields, and using tools like Assemble and Collect. It also describes the roles and responsibilities of a team leader overseeing the migration, including daily updates and discussions with functional teams. Additionally, it covers error handling, report generation, and the importance of maintaining data integrity during the migration process.


Data Migration Flow:

1) First we prepare a relevancy report from the source table in the Prep area, based on the extraction rules provided.

2) Then we join our target table from the Prep area to the main workstream area so that only relevant records land in the main workstream area. The join is on the primary key (ADMM). In the main workstream we then filter the values in the Assemble where clause using the target table values from the Prep area (ADM); the code is generated in the CranPort package, which we can execute to check the record count. (A sketch of this filter follows the list.)

3) Then we map the fields as per the mapping sheet.
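
As a rough sketch of the relevancy filter in step 2 (not the actual generated CranPort code), it amounts to a join on the primary key; the PrepDB database name and KUNNR key below are illustrative assumptions:

    -- Sketch: keep only the records that exist in the prep-area relevancy table.
    SELECT src.*
    FROM SDBSAP_ECC.dbo.KNA1 AS src
    INNER JOIN PrepDB.dbo.ttKNA1 AS prep
        ON src.KUNNR = prep.KUNNR;      -- join on the primary key

    -- The equivalent Assemble where clause filter would be:
    -- WHERE KUNNR IN (SELECT KUNNR FROM PrepDB.dbo.ttKNA1)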

Roles and Responsibilities:

I handle a team of three members.

I run a daily stand-up call with the team members to take status updates on the build and to help them learn Syniti.
I join the mapping sheet discussion sessions with the functional consultants to discuss with the business.
I am also building objects end to end using Syniti.

COLLECT

We use Collect only when we have data in the source.

1) We add the source and target table and save it. Then we click on Build and Refresh.
Build creates the Assemble package (an Assemble package is simply a query that moves the data from source to target).
Refresh: there are two types of refresh, source and target.
Source refresh - when we refresh the data from the source system.
Target refresh - once we have loaded the data into SAP and want it reflected back in SQL, so that we can generate the zLoaded flag (1 for the records which are loaded) and also the post-load report.
Refresh runs the query in the Assemble package.
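
A minimal sketch of the zLoaded flagging after a target refresh, using the DSW and DGSAPS4 names from these notes; the ttKNA1 table and KUNNR join are illustrative assumptions:

    -- Sketch: flag staging records that made it into SAP after a target refresh.
    UPDATE tt
    SET tt.zLoaded = 1                      -- 1 = record was loaded
    FROM DSW.dbo.ttKNA1 AS tt
    INNER JOIN DGSAPS4.dbo.KNA1 AS s4       -- data refreshed back from SAP
        ON tt.KUNNR = s4.KUNNR;             -- KUNNR key is illustrative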

Q) How do you apply a where clause on the target table in Collect?

1) Go to Collect, open the vertical view, go to Advanced Settings, click on Edit and write the where condition, then click on Build and Refresh.
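
For illustration, with a condition such as LAND1 = 'AU' entered there, the Assemble package built by Collect effectively runs a filtered extraction like this (table and filter are assumed examples):

    -- Sketch: the extraction query with the where condition applied.
    SELECT *
    FROM SDBSAP_ECC.dbo.KNA1
    WHERE LAND1 = 'AU';    -- the condition typed in the vertical view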

Q) When you click on Refresh in Collect, what happens and how does the data flow?
There is an Assemble package at the backend which runs.

Copy of source data - SDBSAP_ECC
DSW - staging area
DGSAPS4 - target

ASSEMBLE

In Assemble we go to the source database (SDB) >> Targets >> give the package name (db_packagename.imp) >> source ID (DSWSOURCE_Filepath) >> source type (Excel) >> target type (Table). We create the Assemble package for the table and then run it.
Once we run it, it creates a source table in the source DB.
The source can be Excel, text, or a delimited file.
The target is always a table.

Q) In Assemble, in which database do we create the Assemble package?

The source database.

- When importing a text file, we need to give the delimiter in the vertical view, under General.
- When importing multiple tabs from an Excel file, we need to specify them: go to Advanced Properties in the source Excel worksheet and add them.
- Target delete records: when we tick Target Delete Records in the vertical view, it deletes the existing data in the target table before adding the new data. (If we don't tick it, the load appends and may create duplicate records; see the sketch below.)
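
A minimal sketch of the difference, assuming a hypothetical target table CustomerFile and worksheet import:

    -- Sketch: with Target Delete Records ticked, the refresh behaves like:
    DELETE FROM SDBSAP_ECC.dbo.CustomerFile;            -- clear old rows first
    INSERT INTO SDBSAP_ECC.dbo.CustomerFile (KUNNR, NAME1)
    SELECT KUNNR, NAME1 FROM SDBSAP_ECC.dbo.Worksheet1; -- then load fresh data

    -- Unticked, only the INSERT runs, so re-running the package appends
    -- the same rows again and can create duplicates.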

Q) Can we use Assemble and Collect at the same time?

Yes, we can use Assemble and Collect at the same time when we have to use multiple sources.

For your source data, the source ID will be DSW_SOURCE_FILEPATH.


CONSOLE

Console consists of a hierarchy: waves, process areas and objects.
Wave - defines a particular country or region.
Process area - a module within a wave, e.g. Finance, PP, O2C and so on.
Objects - the tables we are going to work on in a particular process area; these are the approved objects in the DOML which require a build from Syniti.

Why do we go to Console? To see which process areas and objects exist in a particular wave.
Can we add a wave or process area? Yes, there is an option to add them.

What is a context? The combination of a wave and a process area is called a context.

DESIGN

We come to Design to tell ADM which target tables we are going to design for our development.
- We select the source (give the source database)
- Add the target table
- Add the developer name
- Import fields (from which system types can we import fields? System type ID - BOASAPECC, Excel, etc.)
- Activate key fields and in-scope fields
- We give the usage as Natural (loaded into the target) or Utility (used only for temporary transformation; these fields are not part of the load file, they start with Z, and we can also use utility fields for custom fields)
- Required: Technical (fields which must have a value; ADM creates an errors/missing report), Business (fields required by the business; error reports also get created), Optional (may or may not have data; ADM does not create a report)
- We do Sync to Map

What is a system type ID? When we click Import Fields in Design, the fields come into ADM from the system type ID.
A system type is a kind of directory in ADM where all the table structures are stored.

Key fields are the fields which make the data in a table unique.

zLegacy fields get created for key fields and check table fields.
How to add an object:
- Click on Add
- Give the name, description, priority and load type (e.g. master data)
- Then save

Go to Console:
- Click on the object icon
- Click on Add
- Give the object name (the object we created will appear in the dropdown)
- Click on the target icon for that object
- Give the priority, name (ttKNA1), description, usage and system type ID (SAP_S4HANA)
- So what we are doing is building ttKNA1, which is a copy of table KNA1

The next step is to give the source.

Click on the source icon for the target we have created; the target source opens below.

Source data source    Source type    System type ID
SDBSAP_ECC            Add row        SAP_ECC

For importing fields there are three buttons at the top, including 1) Target Import and 2) Import Fields.

We will do it via Import Fields.


So from Design we know what our target table is and which fields we are going to map.

MAPPING

>> We go to Target Sources in Mapping >> click on Edit on the left and give the table name which we created in Assemble, then save it.

>> Then click on the three dots >> go to the vertical view schema column, set the primary key as a key field and lock it.
>> Then we go back to the vertical view and give the where clause if we want to apply any filter.

1) Click on Map > click on Targets > a list of objects appears > click on the source icon for our object > give the source database object (e.g. KNA1) > click on the three vertical dots > Edit > (i) the active field is zActive, (ii) the source table details and filter clause can also be seen here > Save.

2) The next step is field mapping and value mapping:

- Copy - a one-to-one copy from the source.
- Internal - generated internally by SAP; we just give the action type and a rules comment here.
- Default - pass a fixed default value.
- Manual rule - complex logic or a view; for example, if we want to join multiple tables we can use a manual rule.
- Rule - if we are applying any simple logic to a field.
- Rule XREF - first we convert the data by writing rules, and then the value mapping is done (we convert the value based on the rule).
- XREF - we use XREF when a particular field requires value mapping.
Value mapping converts the source data into a form that the target system accepts, e.g. for a country field: the source has AU and the target expects Australia, so value mapping converts AU to Australia. (A sketch follows.)
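
A minimal sketch of what an XREF value-mapping rule does, assuming a hypothetical xref table tvmLAND1 holding SourceValue/TargetValue pairs (ADM's real generated objects differ):

    -- Sketch: replace source values with their mapped target values.
    UPDATE tt
    SET tt.LAND1 = x.TargetValue            -- e.g. 'AU' -> 'Australia'
    FROM DSW.dbo.ttKNA1 AS tt
    INNER JOIN DSW.dbo.tvmLAND1 AS x
        ON tt.LAND1 = x.SourceValue;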

After mapping is done, click on Submit All. First the red circle becomes amber; once we approve the mapping the button turns green.

For joining two tables:

Under Target Source, click on the Update Row Source icon, then give the table name.

Value mapping:
Click on Edit for the field you want to value-map; give the action (XREF), source table (LFA1) and source field (LAND1).

After performing the above step, click on Value Mapping (Config) under Configuration and give the value mapping table (T005); inside the mapping icon you can see your field details.

AUTOMATION/AUTOGEN

1) Select your object, click on Create Target Table, then OK.
The target table will be created in your database (the log will show success).
2) For the target we have created, we need to create a source table.
Click on the source icon of the object; a page opens on the left; click on the source table icon, then OK (the log will show: source object has been built).
3) Build all source rules. If any SQL statement is incorrect, this step stops and throws an error, which we can check in the log.
Note: Collect can raise different types of errors: COLLATE errors, check table errors, and sometimes a field not being configured.
BUILD REPORTS - reports are generated based on the rules of the previous step.
Each table has primary keys, and a missing report is created automatically for these fields.

We also set the autogen level by going to the object, clicking on Edit, and setting the autogen level.

What is the autogen level when building for the first time?
ON - drop and rebuild tables, views and procedures.
What types of rules get created when we click Autogen?
Update, insert and delete rules.
What types of reports get created when we click Build Reports?
Missing sel, summary sel and detail sel.

TRANSFORM

Click on Transform. You can see the target source of your object here; package details can also be seen.

Click on Transform; the screen is divided into 2 sections:

1) The top one is the target section, where we can see the actual staging table of the target (tt table).
2) The bottom one is the source section, where we can see the staging table for the source (st table). Here we can have any number of source tables.

When we hit Process:
Process target: import package > source rules (in the last step of the source rules, data gets inserted into the tt table) > source report > target rules > target report > remediation rules > export.
When we process the target, it is like a job run in DS, and we can schedule it as well.

When the job fails, the notification turns red. To see in which section or part of the ETL the job is stuck, click on Queue; the monitor screen pops up and you can see the status of the job.
The tasks inside the monitor give you the details of the execution.
Target report types - error reports (missing sel), target readiness reports (detail sel, summary sel), info reports (preload, post-load), rulebook report (the transformations applied in ADM).
How do the update rules make their updates in the backend?
There is an update procedure which applies the rule in the backend.
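
As a hedged sketch of such an update procedure (the name, rule and table below are illustrative, not ADM's actual generated code):

    -- Sketch: an update rule wrapped in a procedure, as autogen would generate.
    CREATE PROCEDURE dbo.ttKNA1_UpdRule_DefaultLanguage
    AS
    BEGIN
        -- example default rule: set the language where the source left it empty
        UPDATE DSW.dbo.ttKNA1
        SET SPRAS = 'EN'
        WHERE SPRAS IS NULL OR SPRAS = '';
    END;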

In which database do we create the preload and postload reports?

DSW (the staging database).
What do the detail, summary and error reports contain?
Detail - checks configuration values, i.e. whether the values are present in SAP or not, so that there is no issue while loading.
Summary - a count of how many distinct errors there are per record.
Error - if zErrorFlag = 1, the record will not load; if zErrorFlag = 0, the record will load. (A sketch follows.)
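
A minimal sketch of how zErrorFlag gates the load, with an assumed error condition (missing NAME1):

    -- Sketch: an error rule raises the flag...
    UPDATE DSW.dbo.ttKNA1
    SET zErrorFlag = 1
    WHERE NAME1 IS NULL OR NAME1 = '';   -- example error condition

    -- ...and only clean records make it into the load file.
    SELECT *
    FROM DSW.dbo.ttKNA1
    WHERE zErrorFlag = 0;
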
Once we click on Target Source Process, what happens in the backend?
It runs your source package; the source package transfers the data from the source DB to the staging database (DSW), and after that all the source rules run.

ERRORS WE GET WHILE EXECUTING THE TRANSFORM:

1. Primary key issue: there may be duplicates (if the primary key combination is not unique, we get this error).
We resolve this by giving a unique value for that field, or sometimes the client asks us to remove the duplicates; we can write two queries in one rule: first update a field with the row number, then write a delete rule where the row number is greater than 1 (see the sketch after this list).

2. String or binary data would be truncated: we should check which field we are getting this error for (we can trim it using SUBSTRING, or if the client wants all the data we can increase the field size in the backend).
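
A hedged sketch of both fixes, assuming ttKNA1 with key KUNNR, a helper column zRowNo, and a 35-character target field NAME1 (all illustrative names):

    -- Fix 1: duplicate primary keys. Rule A stamps a row number per key...
    ;WITH d AS (
        SELECT zRowNo,
               ROW_NUMBER() OVER (PARTITION BY KUNNR ORDER BY KUNNR) AS rn
        FROM DSW.dbo.ttKNA1
    )
    UPDATE d SET zRowNo = rn;

    -- ...and rule B deletes everything beyond the first occurrence.
    DELETE FROM DSW.dbo.ttKNA1 WHERE zRowNo > 1;

    -- Fix 2: "string or binary data would be truncated" - trim to the target length.
    UPDATE DSW.dbo.ttKNA1
    SET NAME1 = SUBSTRING(NAME1, 1, 35)   -- 35 = assumed target field length
    WHERE LEN(NAME1) > 35;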

MIGRATION COCKPIT STEPS:

I did it in two ways:
1. Pushing the data from the export (load program fields) to the cockpit staging tables. I am not aware of how the connections are done; our architect team used to do that.
2. We were given synonyms against each table in the cockpit; we used to insert the data into the synonyms from our export, and the respective cockpit staging table gets updated with the data.
3. The next step is to prepare the data in the cockpit.
4. Mapping of data; we map all the config fields here.
5. Simulation of the data.
6. Migration of the data.

Errors while loading to the migration cockpit:

1. While uploading the export file into the staging table we get errors for date formats: sometimes we insert in the format ddmmyyyy, but we need to use yyyymmdd, which is what the cockpit accepts. (We can fix this in the export view and export file by changing the format; see the sketch after this list.)

2. Primary key field missing data error (primary key NOT NULL and UNIQUE constraints): we resolve this by adding the data where it is missing; we connect with the concerned person for confirmation or data correction.

3. Pre-configured field value error: e.g. there is a field named ABC with configured values LT, LM and LN in the cockpit; if we try to put in any other value, the cockpit won't accept it.

4. Field length issues.
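
A minimal sketch of the date fix from error 1, assuming the date travels as text in an illustrative column zDate:

    -- Sketch: reshuffle ddmmyyyy text into the yyyymmdd the cockpit accepts,
    -- e.g. '31122024' -> '20241231'.
    UPDATE DSW.dbo.ttKNA1
    SET zDate = SUBSTRING(zDate, 5, 4)    -- yyyy
              + SUBSTRING(zDate, 3, 2)    -- mm
              + SUBSTRING(zDate, 1, 2)    -- dd
    WHERE LEN(zDate) = 8;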

How do we compare postload and preload:

In the postload report we take the legacy fields and the S4 fields and generate a compare flag. We use a comparing function to compare the legacy and S4 fields; if the compare flag returns 1 then the same record got migrated, and if it returns 0 then a wrong record got migrated. (A sketch follows.)
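
A minimal sketch of the compare, assuming NAME1 as the compared field and the database names used earlier in these notes:

    -- Sketch: compare a legacy field with its S4 counterpart and derive the flag.
    SELECT pre.KUNNR,
           pre.NAME1  AS LegacyName,
           post.NAME1 AS S4Name,
           CASE WHEN pre.NAME1 = post.NAME1 THEN 1 ELSE 0 END AS zCompareFlag
    FROM DSW.dbo.ttKNA1 AS pre              -- preload (legacy) values
    INNER JOIN DGSAPS4.dbo.KNA1 AS post     -- values refreshed back from S4
        ON pre.KUNNR = post.KUNNR;
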
Q. Have you created any error reports, and why did you create them?
Yes. For example, we have a mapping sheet for KNA1 and we need to check whether the email ID is in the correct format, or the date format; for that I created an error report and sent it to the client (see the sketch below).
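
A minimal sketch of such a check, with an assumed email column zEmail and a deliberately crude pattern:

    -- Sketch: list records whose email does not look like name@domain.tld.
    SELECT KUNNR, zEmail
    FROM DSW.dbo.ttKNA1
    WHERE zEmail NOT LIKE '%_@_%._%'     -- crude format test
       OR zEmail IS NULL;
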
Q. Difference between source rules and target rules:
Source rule: if we want to truncate or update the data, we write source rules.
Target rule: we write target rules when a requirement comes up to change the data after loading it; at that point we write the rules directly on the target.

Examples of Objects in SMM:


EQUI – Equipment
VBAP – Sales Order
EKPO – Purchase Order
MCHA – Batch master General data
MCHB – Batch Stock

Examples of Objects in Customer:


KNA1 – Customer General data
KNVV – Customer sales data
KNBK – Bank data
KNB1 – Company code data

Examples of Objects in Finance:

AP – Accounts Payable (transactional data)
AR – Accounts Receivable (transactional data)
GL / Accounts (transactional data)
