Informatica IICS Interview Questions
3. What is a Synchronization task?
A Synchronization task helps you synchronize data between a source and a target. It can be built easily from the IICS UI by selecting the source and target, without using transformations as in mappings. You can also use expressions to transform the data according to your business logic, use data filters to filter data before writing it to the target, and use lookups to fetch values from other objects. Anyone without PowerCenter mapping and transformation knowledge can easily build Synchronization tasks, as the UI guides you step by step.
7. What metadata information gets stored in the Informatica Cloud (IICS) repository?
Informatica Cloud Services includes the IICS repository, which stores various information about tasks. As you create, schedule, and run tasks, all the metadata is written to the IICS repository.
The information stored in the IICS repository includes:
Source and Target Metadata: Metadata of each source and target, including field names, datatype, precision, scale and other properties.
Connection Information: The connection information used to connect to specific source and target systems, stored in an encrypted format.
Mappings: All the data integration tasks built, their dependencies and rules.
Schedules: The schedules created to run the tasks built in IICS.
Logging and Monitoring information: The results of all the jobs.
13. What is the difference between a Union transformation in Informatica Cloud vs Informatica Powercenter?
In earlier versions of Informatica Cloud, the Union transformation allowed only two input groups. Hence, if three different source groups needed to be mapped to a target, the user had to chain two Union transformations: the output of the first two groups into Union1, and the output of Union1 together with group3 into Union2.
The latest version of Informatica Cloud supports multiple input groups, so all the input groups can be handled in a single Union transformation.
19. What are the parameter types available in the Informatica Cloud?
You can add parameters to mappings to create flexible mapping templates that developers can use to create
multiple mapping configuration tasks. IICS supports two types of parameters.
Input Parameter: Similar to a parameter in PowerCenter. You can define an input parameter in a mapping and set its value when you configure a mapping task. The parameter value remains constant throughout the session run, as defined in the mapping task or a parameter file.
In-Out Parameter: Similar to a variable in PowerCenter. Unlike input parameters, an In-Out parameter can change each time a task runs. When you define an In-Out parameter, you can set a default value in the mapping, but you would typically change its value at run time in an Expression transformation using the SETVARIABLE family of functions. The mapping saves the latest value of the parameter after the successful completion of the task, so when the task runs again, the mapping task uses the saved value instead of the default value.
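The run-to-run behaviour of an In-Out parameter can be pictured with a small Python sketch (my own simulation, not Informatica syntax; the file name and field names are invented for illustration). The value persists between runs the way IICS persists the parameter in its repository:

import json, os

STATE_FILE = "inout_param.json"   # hypothetical stand-in for the IICS repository

def load_param(default):
    # first run: fall back to the default defined in the mapping
    if not os.path.exists(STATE_FILE):
        return default
    with open(STATE_FILE) as f:
        return json.load(f)["max_loaded_id"]

def run_task(rows, default=0):
    max_id = load_param(default)
    processed = [r for r in rows if r["id"] > max_id]   # filter on the parameter value
    for r in processed:
        max_id = max(max_id, r["id"])                   # SETMAXVARIABLE-style update
    with open(STATE_FILE, "w") as f:                    # saved after the successful run
        json.dump({"max_loaded_id": max_id}, f)
    return processed

print(run_task([{"id": 1}, {"id": 2}]))   # first run: both rows processed
print(run_task([{"id": 2}, {"id": 3}]))   # second run: only id 3 is new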
21. When the source is parameterized in a Cloud mapping, the Source transformation fields would be empty. How then do the fields get propagated from the source to the downstream transformations?
To propagate the fields to downstream transformations when the source is parameterized, initially create the mapping with the actual source table. In the transformation immediately downstream of the source, set the Field Selection Criteria to Named Fields and include all the source fields in the Incoming Fields section of the transformation. Then change the source object to a parameter. This way, the source fields are still retained in the downstream transformation even though they are no longer available in the Source transformation after the source is parameterized.
22. To include all incoming fields from an upstream transformation except those with dates, what should you
do?
Configure two field rules in the transformation. First, use the All Fields rule to include all the fields coming from the upstream transformation. Then create a Fields by Data Types rule to exclude fields by data type, selecting Date/Time as the data type to exclude from the incoming fields.
24. What are Field Name conflicts in IICS and how can they be resolved?
When fields with the same name reach a downstream transformation, such as a Joiner, from different upstream transformations, the Cloud Mapping Designer generates a Field Name Conflict error. You can resolve the conflict either by renaming the fields in the upstream transformation, or by creating a field rule in the downstream transformation to bulk-rename the fields by adding a prefix or a suffix to all incoming fields.
25. What system variables are available in IICS to perform Incremental Loading?
IICS provides access to the following system variables, which can be used as data filter variables to pick up newly inserted or updated records:
$LastRunTime returns the last time the task ran successfully.
$LastRunDate returns only the last date on which the task ran successfully.
The values of $LastRunDate and $LastRunTime are stored in the Informatica Cloud repository/server and cannot be overridden. Both variables store their values in the UTC time zone.
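As a rough illustration in plain Python (not IICS syntax; the field names are invented), an incremental data filter on $LastRunTime behaves like this:

from datetime import datetime, timezone

# stand-in for $LastRunTime, which IICS stores in UTC
last_run_time = datetime(2024, 1, 1, tzinfo=timezone.utc)

rows = [
    {"id": 1, "updated_at": datetime(2023, 12, 31, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]

# equivalent of the data filter: updated_at > $LastRunTime
delta = [r for r in rows if r["updated_at"] > last_run_time]
print([r["id"] for r in delta])   # [2] -> only the newly updated record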
26. What is the difference between the connected and unconnected sequence generator transformation in
Informatica Cloud Data Integration?
The sequence generator can be used in two different ways in Informatica Cloud: with incoming fields disabled, and with incoming fields not disabled.
The difference between the two shows when the NEXTVAL field is mapped to multiple transformations:
→ A sequence generator with incoming fields not disabled generates the same sequence of numbers for each downstream transformation.
→ A sequence generator with incoming fields disabled generates a unique sequence of numbers for each downstream transformation.
Instead of simply listing scenario-based interview questions and their solutions, I would like to take a different approach here.
We shall take a concept and discuss what kind of scenario-based interview questions could be built around it.
Chapter-1: Adding Sequence numbers to the Source records
Chapter-2: Introducing a dummy field
Chapter-3: Comparing Current record with Previous record
Chapter-4: LEAD and LAG implementation in Informatica Cloud
Chapter-5: Denormalizing data in Informatica Cloud
Let us discuss in detail how to add sequence numbers to your source data, and in what kinds of scenarios we need to do so.
Contents
I. Introduction: How to add sequence numbers to your source data?
II. Generating sequence numbers using Expression transformation
III. Generating sequence numbers using Sequence Generator transformation
Q1. Design a mapping to load alternate records to different tables
Q2. Design a mapping to load alternate group of three records into three different targets
Q3. Design a mapping to load every 150th record to the target table
IV. Conclusion
I. Introduction: How to add sequence numbers to your source data?
II. Generating sequence numbers using Expression transformation
In the Expression transformation, create a variable port and assign it a value that increments itself by 1. Next, create an output port and assign the variable port to it. The ports in the Expression transformation will be as below:
V_count = V_count+1
O_count = V_count
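In plain Python terms (a simulation of the logic, not Informatica syntax), the variable port acts like a running counter that keeps its value across rows:

rows = ["a", "b", "c", "d"]

v_count = 0                          # variable port: retains its value across rows
numbered = []
for row in rows:
    v_count += 1                     # V_count = V_count + 1
    numbered.append((v_count, row))  # O_count = V_count

print(numbered)   # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]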
III. Generating sequence numbers using Sequence Generator transformation
Sequence Generator transformation is a passive and connected transformation in Informatica Cloud. It is used to generate sequences of numeric values. To generate the sequence numbers, we always use the NEXTVAL field of the Sequence Generator transformation.
Q1. Design a mapping to load alternate records to different tables
Description: The question asks for alternate records, i.e. the 1st, 3rd, 5th, … records should be loaded into Target1 and the 2nd, 4th, 6th, … records into Target2. In other words, the odd and even records should be loaded into different targets.
Solution:
1-5. As in the introduction above, map the source fields to an Expression transformation and create a variable port V_count = V_count+1 along with an output port:
Flag = MOD(V_count,2)
6. Map the fields from the Expression to a Router transformation. In the Router, create two new output groups:
Even: Flag=0
Odd: Flag=1
7. Map the records from the Odd output group to Target1 and the Even output group to Target2.
NOTE: If you are using a Sequence Generator transformation, there is an additional step involved: map the fields from the source to the Sequence Generator and then to the Expression. In the Expression, create only one output port, Flag, and assign it the value MOD(NEXTVAL,2). The rest of the procedure is the same.
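A quick Python simulation of the odd/even routing, purely illustrative:

rows = ["r1", "r2", "r3", "r4", "r5"]

target1, target2 = [], []
for count, row in enumerate(rows, start=1):
    if count % 2 == 1:      # Flag = MOD(V_count, 2) -> 1 means odd
        target1.append(row)
    else:                   # Flag = 0 means even
        target2.append(row)

print(target1)   # ['r1', 'r3', 'r5']
print(target2)   # ['r2', 'r4']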
Q2. Design a mapping to load alternate group of three records into three different targets
Description: The requirement here is to load the 1st record into the 1st target, the 2nd record into the 2nd target, and the 3rd record into the 3rd target. When the 4th record comes, it should be loaded into the 1st target again, and the sequence continues.
Solution:
In the earlier case, as there were only two targets, we used the odd/even strategy. In this case, we have to increment a record count from 1 to 3 and, when the count would become greater than 3, reset it back to 1. The ports in the Expression transformation will be as below:
V_count = IIF(V_count=3,1,V_count+1)
Row_Number = V_count
5. Map the fields from the Expression to a Router transformation. In the Router, create three new output groups:
Group1: Row_Number=1
Group2: Row_Number=2
Group3: Row_Number=3
6. Map the records from each output group to the respective target transformation.
NOTE: This can also be implemented using a Sequence Generator transformation. The only change required is to replace the Expression with a Sequence Generator. In the Sequence Generator properties, enable the Cycle option and set the Cycle Start Value to 1 and the End Value to 3. The rest of the procedure is the same.
The odd/even example discussed earlier can also be implemented using this approach, by resetting the value to 1 whenever the count becomes greater than 2.
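The reset logic, simulated in Python (illustrative only):

rows = ["r1", "r2", "r3", "r4", "r5", "r6", "r7"]

targets = {1: [], 2: [], 3: []}
v_count = 0
for row in rows:
    v_count = 1 if v_count == 3 else v_count + 1   # IIF(V_count=3, 1, V_count+1)
    targets[v_count].append(row)

print(targets)   # {1: ['r1', 'r4', 'r7'], 2: ['r2', 'r5'], 3: ['r3', 'r6']}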
Q3. Design a mapping to load every 150th record to the target table
Description: For example, if the source has 300 records, only the 150th and 300th records should be loaded.
Solution:
As discussed in the above examples, the solution can be implemented here in multiple ways.
Method1:
1. Map the fields from the source to an Expression transformation.
2. Create a new variable field of type integer, V_count, and assign it the value V_count+1.
3. Create a new output field of type integer, Flag, and assign it the value MOD(V_count,150).
4. In the Expression, the ports will be as below:
V_count = V_count+1
Flag = MOD(V_count,150)
5. Map the fields from the Expression to a Filter transformation and provide the filter condition Flag=0.
6. Map the records from the Filter to the target transformation.
Method2:
This method is similar to the one discussed above, except for a change in the Expression transformation. The ports should be as below:
V_count = IIF(V_count=150,1,V_count+1)
Flag = V_count
With this method, the filter condition becomes Flag=150 instead of Flag=0.
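Both methods in Python form (a simulation, not Informatica syntax), to make the difference concrete:

N = 150
rows = list(range(1, 301))          # 300 source records

# Method 1: running counter + MOD, filter on Flag = 0
m1 = [r for i, r in enumerate(rows, start=1) if i % N == 0]

# Method 2: counter that resets after 150, filter on Flag = 150
m2, v_count = [], 0
for r in rows:
    v_count = 1 if v_count == N else v_count + 1
    if v_count == N:
        m2.append(r)

print(m1 == m2, m1)   # True [150, 300] -> the 150th and 300th records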
IV. Conclusion
The above two methods can also be implemented using a Sequence Generator transformation, which makes four different approaches to implement the solution.
In order to access data from Salesforce, a connection needs to be created first from the Administrator tab. The connection can then be used in Data Integration tasks either to retrieve data from Salesforce or to write data into it.
To avoid using a Security Token while setting up a Salesforce connection, the Informatica Cloud IP ranges should be entered under Trusted IP Ranges in the Salesforce application. The Informatica Cloud trusted IP ranges can be found by hovering over the question-mark symbol next to the Security Token option in a Salesforce connection.
3. What is PK Chunking?
PK Chunking is an advanced option that can be configured for Salesforce Bulk API tasks in a Source transformation.
If PK Chunking is enabled, Salesforce internally generates separate batches based on the given PK Chunking size. Each batch is a small chunk of a bulk query, created based on the Primary Key (Id) of the queried records.
It is recommended to enable PK chunking for objects with more than 10 million records, as it improves performance.
For example, say you enable PK chunking on an Account table with 10,000,000 records. Assuming a chunk size of 250,000, the query is split into 40 smaller queries, each submitted as a separate batch.
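The batch count is simply the record count divided by the chunk size, rounded up, as a quick sanity check shows:

import math

records, chunk_size = 10_000_000, 250_000
print(math.ceil(records / chunk_size))   # 40 batches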
4. How to add a filter condition in a source transformation when the source is a Salesforce object?
When the source is a Salesforce object, defining the filter condition under the Filter option available under Query Options does not work. The filter condition needs to be defined under SOQL Filter Condition, available under the Advanced tab of the Source transformation.
The Filter option under the Query Options tab can be used only if the filter condition needs to be parameterized.
Alternatively, the Source Type can be set to Query and the filter condition passed in the source query.
5. How does a Boolean field from a Salesforce object get processed in Informatica Cloud?
Boolean fields from Salesforce are read as integer fields by Informatica Cloud tasks: the Salesforce values TRUE and FALSE are read as '1' and '0' respectively.
Likewise, to insert or update a Boolean field in a Salesforce object, pass an integer field with value '1' or '0', and the field in Salesforce will be set to TRUE or FALSE accordingly.
Salesforce objects can be used in a connected Lookup transformation to look up data, but Informatica Cloud does not support using Salesforce objects in an unconnected Lookup transformation.
8. What is the difference between Soft delete and Hard delete of Salesforce records? How to enable them from
Informatica Cloud?
When you delete records from a Salesforce object, they are moved to the Recycle Bin; these records are said to be soft deleted and can be restored.
When you delete records permanently from a Salesforce object, they are said to be hard deleted and cannot be restored.
To enable hard delete of Salesforce records from IICS, check the Hard Delete option when you select the Delete operation in the Target transformation. If the option is not selected, the delete operation is treated as a soft delete.
9. How to Include archived and deleted rows in the source data queried from Informatica Cloud?
When Salesforce objects are used as a source, the archived and deleted records are by default omitted from the query. To include them, enable the Include archived and deleted rows in the Source option available under the Query Options of the Source transformation.
10. What is the default timezone in which date fields are stored in Salesforce objects?
In Salesforce, by default, DateTime fields store the time in the UTC timezone and display the appropriate date and time to the user based on the user's personal timezone settings.
11. What are the advanced properties available for a Salesforce target?
Max Batch Size
Maximum number of records the agent writes to a Salesforce target in one batch. Default is 200 records.
This property is not used in Bulk API target sessions.
Set Fields to Null
Replaces values in the target with null values from the source.
By default, the agent does not replace values in a record with null values during an update or
upsert operation.
Use SFDC Error File
Generates the error log files for a Bulk API target session. By default, the agent does not generate
the error log files.
To generate an error log file for a Bulk API target session, select the Monitor Bulk option.
Use SFDC Success File
Generates the success log files. By default, the agent does not generate the success log files.
To generate a success log file for a Bulk API target session, select the Monitor Bulk option.
Salesforce API
Defines which API method is used to process records while loading data into Salesforce. The two options available are Standard API and Bulk API.
By default, the agent uses the Salesforce Standard API.
12. What are the advanced properties available for a Salesforce target with Bulk load?
Monitor Bulk:
You can enable a Bulk API task for monitoring. With monitoring enabled, the Data Integration service requests the status of each batch from Salesforce and logs it in the session log.
By default, the Data Integration service does not monitor Bulk API jobs. Without monitoring, the activity log and session log contain information about batch creation, but no details about batch processing or accurate job statistics.
Enable Serial Mode
The Salesforce service can perform a parallel or serial load for a Bulk API task. By default, it
performs a parallel load.
In a parallel load, the Salesforce service writes batches to the target at the same time. In a serial load, the Salesforce service writes batches to the target in the order in which it receives them.
Use a parallel load to increase performance when you are not concerned about the target load
order. Use a serial load when you want to preserve the target load order.
Contents
I. Introduction to Informatica Cloud Transformations
o Connected Transformation
o Unconnected Transformation
o Active Transformation
o Passive Transformation
1. Source Transformation in Informatica Cloud
o 1.1 Database objects as Source Object
o 1.2 Flat Files as Source Object
o 1.3 Fields section in Source Transformation
o 1.4 Partitions section in Source Transformation
2. Filter Transformation in Informatica Cloud
o 2.1 Types of Filter Conditions in Informatica Cloud
o 2.2 Simple Filter Condition
o 2.3 Advanced Filter Condition
o 2.4 Parametrization of Filter condition
o 2.5 Filter Performance Tuning
3. Router Transformation in Informatica Cloud
o 3.1 Output Groups in Router Transformation
o 3.2 Default Group in Router Transformation
o 3.3 Filter Condition types in Router Transformation
4. Expression Transformation in Informatica Cloud
o 4.1 Expression Transformation Use Cases
o 4.2 Expression Transformation Field Types
o 4.3 Expression Macros
5. Sorter Transformation in Informatica Cloud
o 5.1 Sorter Transformation Properties
o 5.2 Configuring Sort Fields and Sort Order
o 5.3 Parameterizing the Sort Condition
o 5.4 Sorter Transformation Advanced Properties
6. Aggregator Transformation in Informatica Cloud
o 6.1 Aggregator Transformation Properties
o 6.2 Types of Aggregator Cache
o 6.3 How Sorted Input increases Performance in Aggregator?
o 6.4 What happens when no fields are selected as Group By Fields in Aggregator?
7. Sequence Transformation in Informatica Cloud
o 7.1 Sequence Generator Fields
o 7.2 Sequence Generator Types in Informatica Cloud
o 7.3 Sequence Generator Properties
8. More Transformations…
o Informatica Cloud (IICS) REST V2 Connector & WebServices Transformation
o Shared Sequences in Informatica Cloud (IICS)
o Joiner Transformation in Informatica Cloud (IICS)
o Normalizer Transformation in Informatica Cloud (IICS)
o Union Transformation in Informatica Cloud (IICS)
o Lookup Transformation in Informatica Cloud (IICS)
I. Introduction to Informatica Cloud Transformations
Transformations are mapping objects that represent the operations to be performed on the data in Informatica Cloud Data Integration mappings. Informatica Cloud offers several transformations, each with its own properties and the operation it performs.
The transformations in Informatica Cloud are classified based on their connectivity and on how they handle the rows passing through them.
1. Connected Transformation
2. Unconnected Transformation
Connected Transformation
A Connected transformation is an inline transformation that stays in the flow of the mapping and is connected to other transformations in the mapping.
Unconnected Transformation
An Unconnected transformation is not connected to any other transformation in the mapping. It is usually called from within another transformation and returns a value to that transformation.
Based on how the transformation handles the rows passing through it, the transformations are classified as
1. Active Transformation
2. Passive Transformation
Active Transformation
An Active transformation can change the number of rows passing through it. A transformation is also considered active when it can change the transaction boundary or the position of the rows passing through it.
Any transformation that splits or combines data streams, or reduces, expands, or sorts the data, is active, because it cannot be guaranteed that the number of rows and their positions in the data stream remain unchanged as the data passes through it.
Passive Transformation
A Passive transformation does not change the number of rows passing through it, and it maintains the transaction boundary and the position of the rows.
1. Source Transformation in Informatica Cloud
Source transformation is an active and connected transformation in Informatica Cloud. It is used to read and extract data from the objects connected as the source. The Source transformation can read data from a single source object or from multiple source objects, depending on the connection type.
When you configure a Source transformation, you define Source properties on the following tabs of the Properties
panel:
Source tab: Select the Connection, Source Type and Source Object. You can also configure the
advanced properties depending upon the source connection.
Fields tab: Configure the fields of the object selected as source.
Partitions tab: Configure the partitioning type based on the source connection.
1.1 Database objects as Source Object
When the connection type is any database connection, you could select the source type as a Single Object, Multiple Objects, Query, or Parameter.
Filter and Sort options are not supported when a custom query is used as the source.
1.2 Flat Files as Source Object
When the connection type is a Flat File connection, you could select the source type as a Single Object, File List, Command, or Parameter.
1.3 Fields section in Source Transformation
The Fields section of the Source transformation supports the following operations:
1. New fields can be added from the Fields section of source transformation. If the field is not
present in the database object during the mapping run, the task fails.
2. Existing fields can be deleted from the Fields section. During the mapping run the Integration
service will not try to read the field from the database that is deleted in the source
transformation.
3. The fields can be sorted in ascending order, descending order, or the existing Native order, based on the field name.
4. Source field metadata (i.e. the field's datatype, precision, and scale) can be modified from the Fields section by clicking Options and selecting Edit Metadata.
5. The changes in the source object fields can be synchronized by clicking the Synchronize button.
You can choose between synchronizing All Fields or New Fields only.
6. For Flat File sources, an additional field can be added which gives the source flat file name as
value by selecting the Add Currently Processed Filename field in the Fields section.
1.4 Partitions section in Source Transformation
Partitioning enables the parallel processing of the data through separate pipelines.
There are two major partitioning methods supported in Informatica Cloud Data Integration.
For detailed information about partitions, check out the article on Informatica Cloud Partitioning.
2. Filter Transformation in Informatica Cloud
The filter condition defined in the Filter transformation is an expression that returns either TRUE or FALSE. The default return value of the Filter transformation is TRUE; that means you can add a Filter transformation to a mapping without any condition defined, and it allows all the records to pass through it.
Similarly, you can define the filter condition as FALSE, which acts as a logical stop of the mapping flow, as no records are passed further. This helps while checking the logic of the mapping in case of problems.
2.1 Types of Filter Conditions in Informatica Cloud
The filter condition in a Filter transformation can be one of three types:
Simple
Advanced
Completely Parameterized
Filter conditions are case sensitive. You can use the following operators in the Filter transformation:
= (equals)
< (less than)
> (greater than)
<= (less than or equal to)
>= (greater than or equal to)
!= (not equals)
2.2 Simple Filter Condition
You can create one or more simple filter conditions. A simple filter condition includes a field name, operator, and
value.
When you define more than one simple filter condition, the mapping task evaluates the conditions in the order you specify, combining them with the AND logical operator.
Simple Filter Condition
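As a rough Python analogue (with invented field names), multiple simple conditions are evaluated in order and combined with AND:

conditions = [
    lambda r: r["dept_id"] == 10,    # first simple condition
    lambda r: r["salary"] > 5000,    # second simple condition
]

rows = [{"dept_id": 10, "salary": 7000}, {"dept_id": 10, "salary": 4000}]

# conditions are evaluated in order and combined with AND
passed = [r for r in rows if all(cond(r) for cond in conditions)]
print(passed)   # only the first row satisfies both conditions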
2.3 Advanced Filter Condition
You can use an advanced filter condition to define a complex filter condition. When you configure an advanced filter condition, you can combine multiple conditions using the AND or OR logical operators.
2.4 Parametrization of Filter condition
The filter condition can be completely parameterized in the Filter transformation by creating an input parameter of type Expression, whose value can be passed at mapping run time or from the mapping task.
Additionally, the field name and the field value can be parameterized and used in the simple and advanced filter conditions.
$DEP_ID$ = $DEP_Value$
Using Parameters in Filter condition
In the above example, we have created two input parameters: one of type Field and the other of type String.
When parameters are used in the filter condition, conversion from a simple filter condition to an advanced filter condition is not supported; you need to enter the condition manually.
2.5 Filter Performance Tuning
Use the Filter transformation as close to the source as possible in the mapping. This reduces the number of rows to be processed in the downstream transformations.
In the case of relational sources, if possible, apply the filter condition in the Source transformation instead. This reduces the number of rows read from the source.
3. Router Transformation in Informatica Cloud
Router transformation is an active and connected transformation in Informatica Cloud. It is similar to the Filter transformation, except that multiple filter conditions can be defined in the Router transformation, whereas in the Filter transformation you can specify only one condition, and the rows that do not satisfy it are dropped.
3.1 Output Groups in Router Transformation
You can create multiple groups in the Router transformation, each with its own filter condition. In the example below, you can see multiple groups defined, one for each department.
For each output group defined, a new data flow is created, which can be passed to downstream transformations as shown below.
Router transformation with multiple output groups routing data to multiple targets
3.2 Default Group in Router Transformation
By default, the Router transformation comes with a DEFAULT group. Rows that do not satisfy any filter condition are passed through the default group. In our example, departments with IDs other than 10 and 20 are passed through the DEFAULT group.
Router filter conditions are not if-and-else. If a row satisfies the filter condition of multiple groups, the Router transformation passes the data through all the output groups whose conditions it satisfies.
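Simulated in Python (hypothetical group names and conditions), note how a row can land in more than one group, unlike an if/else chain:

groups = {
    "DEPT_10": lambda r: r["dept_id"] == 10,
    "HIGH_SAL": lambda r: r["salary"] > 5000,
}

row = {"dept_id": 10, "salary": 8000}

# every group whose condition matches receives the row
matched = [name for name, cond in groups.items() if cond(row)]
print(matched or ["DEFAULT"])   # ['DEPT_10', 'HIGH_SAL'] -> both groups get the row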
3.3 Filter Condition types in Router Transformation
The filter condition types supported in the Router transformation are the same as those already discussed for the Filter transformation. You can define a simple or advanced filter condition, or completely or partially parameterize the filter condition, based on the requirement.
4. Expression Transformation in Informatica Cloud
The following field types can be created in an Expression transformation:
Output Field
Variable Field
Input Macro Field
Output Macro Field
Create an Output Field to perform an operation on each record in the data flow and pass the result to the downstream transformation.
Create a Variable Field for calculations that you want to use within the transformation. Variable fields are not passed to the downstream transformations; they usually hold a temporary value and can be used directly in the output fields created in the Expression transformation.
4.3 Expression Macros
Informatica Cloud supports expression macros, which allow you to create repetitive and complex expressions in mappings. Use the Input and Output Macro Field types to implement expression macros.
An Input Macro Field represents the data of multiple source fields; an Output Macro Field represents the calculation that you want to perform on each of those input fields.
For example, suppose you need to trim spaces in the source field data before loading it into the target. Normally, you would have to apply the TRIM logic to each field separately; using expression macros, this can be implemented with one Input Macro Field and one Output Macro Field.
To learn more, check out the detailed article on Expression Macros in Informatica Cloud.
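Conceptually (a Python simulation with made-up field names, not macro syntax), the macro expands one rule over many fields:

row = {"first_name": "  John ", "last_name": " Doe  ", "city": " Austin "}

# one TRIM rule expanded over every input field, the way an
# Output Macro Field expands over an Input Macro Field
trimmed = {field: value.strip() for field, value in row.items()}
print(trimmed)   # {'first_name': 'John', 'last_name': 'Doe', 'city': 'Austin'}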
5. Sorter Transformation in Informatica Cloud
Sorter transformation is an active and connected transformation in Informatica Cloud. It is used to sort data based on incoming fields, in either ascending or descending order.
When you configure a Sorter transformation, you define the Sorter properties on the following tabs of the Properties panel:
Sort tab: Define the fields and sort order on which data coming from the upstream transformation should be sorted.
Advanced tab: Define advanced properties of the Sorter transformation, such as Distinct.
5.2 Configuring Sort Fields and Sort Order
When you specify multiple sort conditions, the mapping task sorts each condition sequentially. The mapping task
treats each successive sort condition as a secondary sort of the previous sort condition. You can modify the order
of sort conditions.
In the above example, the records are sorted by department first, and then the records within each department are sorted by salary.
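The equivalent of such a multi-level sort in Python is a composite sort key (illustrative sketch):

rows = [
    {"dept": 20, "salary": 4000},
    {"dept": 10, "salary": 6000},
    {"dept": 10, "salary": 3000},
]

# primary sort on dept, secondary sort on salary within each dept
ordered = sorted(rows, key=lambda r: (r["dept"], r["salary"]))
for r in ordered:
    print(r)   # dept 10 rows first (3000, then 6000), then dept 20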
5.3 Parameterizing the Sort Condition
Informatica Cloud supports parameterizing the sort condition in the mapping, so that the sort fields and the sort order can be defined when you run the mapping or when you configure the mapping task.
5.4 Sorter Transformation Advanced Properties
The Sorter transformation can also remove duplicate records from the incoming data. This can be enabled by selecting the Distinct option on the Advanced tab.
Advanced properties of Sorter Transformation
The other properties that can be configured from the Advanced tab of sorter transformation are
Case Sensitive: When enabled, the transformation sorts uppercase characters higher than lower
case characters.
Null Treated Low: Treats a null value as lower than any other value. For example, if you configure
a descending sort condition, rows with a null value in the sort field appear after all other rows.
6. Aggregator Transformation in Informatica Cloud
Aggregator transformation is an active and connected transformation in Informatica Cloud. It is used to perform aggregate calculations, such as sums, averages, and counts, on groups of data.
When you configure an Aggregator transformation, you define the Aggregator properties on the following tabs of the Properties panel:
Group By tab – Configure Group by Fields to define how to group data for aggregate expressions.
Aggregate tab – Configure an Aggregate field to define aggregate calculations. You can use
aggregate functions, conditional clauses and non-aggregate functions in aggregate fields.
Advanced tab – To improve job performance, you can configure the Aggregator transformation to use sorted data: on the Advanced tab, select Sorted Input.
For example, when you want to calculate the average salary of employees department-wise, select department_id as the Group By field under the Group By tab, and create a new aggregate field Avg_Salary with the value AVG(Salary) under the Aggregate tab of the Aggregator transformation.
Group By fields in Aggregator Transformation
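A Python equivalent of the department-wise average (illustration only):

from collections import defaultdict

employees = [
    {"department_id": 10, "salary": 4000},
    {"department_id": 10, "salary": 6000},
    {"department_id": 20, "salary": 5000},
]

# group by department_id, then AVG(Salary) per group
salaries = defaultdict(list)
for e in employees:
    salaries[e["department_id"]].append(e["salary"])

avg_salary = {dept: sum(s) / len(s) for dept, s in salaries.items()}
print(avg_salary)   # {10: 5000.0, 20: 5000.0}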
The Aggregator transformation can also be used to remove duplicates. This is achieved by selecting all the incoming fields as Group By fields.
6.3 How Sorted Input increases Performance in Aggregator?
When the Sorted Input option is not enabled, the Aggregator does not know when a group ends, so it holds the entire data set and processes it record by record.
When Sorted Input is enabled, the group-by fields are expected to arrive in sorted order. The Aggregator creates an index cache on the first group-by values it encounters and adds their values to the data cache. When the task reads data belonging to a different group, it performs the aggregate calculations for the cached group and then continues with the next group.
Thereby, one group's results are already forwarded from the Aggregator to the downstream transformation while it is still making aggregate calculations on the next group.
6.4 What happens when no fields are selected as Group By Fields in Aggregator?
When no fields are selected as Group By fields, the Aggregator creates a default index and keeps overwriting each record in the data cache. So finally, the last record of the data is sent as output.
7. Sequence Transformation in Informatica Cloud
Sequence Generator transformation is a passive and connected transformation in Informatica Cloud. It is used to generate sequences of numeric values.
7.1 Sequence Generator Fields
The Sequence Generator transformation has two output fields, NEXTVAL and CURRVAL, of datatype bigint. No other fields can be added, and the default fields cannot be removed.
Output Fields in Sequence Transformation
Use the NEXTVAL field to generate a sequence of numbers based on the Initial Value and Increment By properties.
The CURRVAL value is NEXTVAL plus the Increment By value, i.e. NEXTVAL+1 with the default increment of 1. If you connect only the CURRVAL field without connecting the NEXTVAL field, the mapping task generates a constant value for each row.
7.2 Sequence Generator Types in Informatica Cloud
The sequence generator can be used in two different ways in Informatica Cloud: with incoming fields disabled, and with incoming fields not disabled. To disable the incoming fields, navigate to the Advanced tab of the sequence generator and check the Disable incoming fields option.
The difference between the two shows when the NEXTVAL field is mapped to multiple transformations:
A sequence generator with incoming fields not disabled generates the same sequence of numbers for each downstream transformation.
A sequence generator with incoming fields disabled generates a unique sequence of numbers for each downstream transformation.
To generate the same sequence of numbers when incoming fields are disabled, you can place an Expression
transformation between the Sequence Generator and the downstream transformations to stage the sequence of
numbers.
7.3 Sequence Generator Properties
Initial Value: The first value generated by the Sequence Generator transformation. The default value is 1.
Increment By: The number by which you want to increment the sequence values. The default value is 1.
End Value: The maximum value that the Sequence Generator transformation generates.
Cycle: If enabled, after reaching the End Value the transformation restarts the sequence from the Cycle Start Value. By default, the option is disabled; if disabled and the sequence reaches the End Value with rows still to process, the task fails with an overflow error.
Cycle Start Value: The value the sequence restarts from when the Cycle option is enabled. Default is 0.
Number of Cached Values: The default value is 0. When the value is greater than 0, the mapping task caches the specified number of sequential values and updates the current value in the repository; once the cached values are used up, it goes back to the repository for the next set. If there are unused sequence numbers left in the cached set at the end of the task, they are discarded.
Use this option when multiple partitions use the same Sequence Generator at the same time, to ensure that each partition receives unique values.
The disadvantages of using a Number of Cached Values greater than zero are:
Accessing the repository multiple times during the session.
Discarding of unused cached values, causing gaps in the sequence numbers.
Reset: If enabled, the mapping task generates values based on the original Initial Value for each
run. Default is disabled.
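To tie these properties together, here is a small Python sketch of the Initial Value, Increment By, End Value and Cycle semantics described above (my own simulation, assuming the cycle restarts at the Cycle Start Value, not Informatica internals):

def sequence_generator(initial=1, increment=1, end=9, cycle=False, cycle_start=0):
    value = initial
    while True:
        if value > end:
            if not cycle:
                raise OverflowError("end value reached with rows still to process")
            value = cycle_start          # restart from the Cycle Start Value
        yield value                      # NEXTVAL
        value += increment

seq = sequence_generator(initial=1, increment=1, end=3, cycle=True, cycle_start=1)
print([next(seq) for _ in range(7)])     # [1, 2, 3, 1, 2, 3, 1]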