Informatica Cloud Data Integration
Mappings
July 2021
© Copyright Informatica LLC 2006, 2021
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks
of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://
www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
[email protected].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Table of Contents

Chapter 1: Mappings
    Mapping Designer
    Mapping templates
    Mapping configuration
        Configuring a mapping
        Rules and guidelines for mapping configuration
        Rules and guidelines for mappings on GPU-enabled clusters
        Data flow run order
    Mapping validation
        Validating a mapping
    Mapping data preview
        Preview behavior in non-elastic mappings
        Preview behavior in elastic mappings
        Running a preview job
        Viewing preview results for non-elastic mappings
        Viewing preview results for elastic mappings
    Testing a mapping
    Mapping tutorial
        Preparing for the mapping tutorial
        Step 1. Create a mapping
        Step 2. Configure a source
        Step 3. Create a Filter transformation
        Step 4. Configure a target
        Step 5. Validate and test the mapping
        Step 6. Create a mapping task
    Mapping maintenance
    Mapping revisions and mapping tasks
    Bigint data conversion

Chapter 2: Parameters
    Input parameters
        Input parameter types
        Input parameter configuration
        Partial parameterization with input parameters
        Using parameters in a mapping
    In-out parameters
        Aggregation types
        Variable functions
        In-out parameter properties
        In-out parameter values
        Rules and guidelines for in-out parameters
        Creating an in-out parameter
        Editing in-out parameters in a mapping task
        In-out parameter example
        In-out parameter example for elastic mappings
        Using in-out parameters as expression variables
    Parameter files
        Parameter file requirements
        Parameter scope
        Sample parameter file
        Parameter file location
        Rules and guidelines for parameter files
        Parameter file templates
        Overriding connections with parameter files
        Overriding data objects with parameter files
        Overriding source queries

    Configuring a Visio template
        Creating a Visio template
        Visio template information in tasks
        Template parameters
        Expression macros in Visio templates
        Parameter files and user-defined parameters
        Object-level session properties
        Optional objects
        Rules and guidelines for configuring a Visio template
    Publishing a Visio template
    Uploading a Visio template
        Logical connections
        Input control options
        Parameter display customization
        Advanced session properties
        Pushdown optimization
    Uploading a Visio template and configuring parameter properties
    Visio template revisions
    Creating a mapping task from a Visio template
    Downloading a template XML file
    Deleting a Visio template
    Visio template example
        Step 1. Configure the Date to String template
        Step 2. Upload the Visio template
        Step 3. Create the mapping task

Index
Preface
Use Mappings to learn how to create and use mappings in Informatica Cloud® Data Integration to define the
flow of data from sources to targets. Mappings also contains information about finding and using objects in
your data catalog and information about creating and using parameters.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at [email protected].
Informatica Intelligent Cloud Services Community
Join the Informatica Intelligent Cloud Services Community to collaborate with other users and subject matter experts:
https://network.informatica.com/community/informatica-network/products/cloud-integration
Developers can learn more and share tips at the Cloud Developer community:
https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-
developers
Informatica Intelligent Cloud Services Marketplace
Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:
https://marketplace.informatica.com/
Informatica Knowledge Base
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
[email protected].
Informatica Intelligent Cloud Services Trust Center
Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and
incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status
of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage,
it will have the most current information. To ensure you are notified of updates and outages, you can
subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services
components. Subscribing to all components is the best way to be certain you never miss an update.
Informatica Global Customer Support
For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use
Online Support to log a case. Online Support requires a login. You can request a login at
https://network.informatica.com/welcome.
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site
at https://www.informatica.com/services-and-training/support-services/contact-us.html.
Chapter 1
Mappings
A mapping defines reusable data flow logic that you can use in mapping tasks. Use a mapping to define data
flow logic that is not available in synchronization tasks, such as specific ordering of logic or joining sources
from different systems.
Mapping
Use a mapping to deploy small to medium data integration solutions that are processed directly by a
Secure Agent group using the infrastructure that you provide, or by the Hosted Agent.
Elastic Mapping
Use an elastic mapping to deploy large data integration solutions. When you run an elastic mapping, the
mapping is pushed to an elastic cluster to achieve faster processing for broad data sets. The
infrastructure of the elastic cluster is automatically optimized based on the number and size of elastic
jobs.
Use the Data Integration Mapping Designer to configure mappings. When you configure a mapping, you
describe the flow of data from source and target. You can add transformations to transform data, such as an
Expression transformation for row-level calculations or a Filter transformation to remove data from the data
flow. A transformation includes field rules to define incoming fields. Links visually represent how data moves
through the data flow.
You can configure parameters to enable additional flexibility in how you can use the mapping. Parameters
act as placeholders for information that you define in the mapping task. For example, you can use a
parameter for a source connection in a mapping, and then define the source connection when you configure
the task.
You can use components such as mapplets, business services, and hierarchical schema definitions in
mappings. Components are assets that support mappings. Some components are required for certain
transformations while others are optional. For example, a business services asset is required for a mapping
that includes a Web Service transformation. Conversely, a saved query component is useful when you want to
reuse a custom query in multiple mappings, but a saved query is not required.
To use mappings and components, your organization must have the appropriate licenses.
Mapping Designer
Use the Mapping Designer to create mappings that you can use in mapping tasks.
The Mapping Designer includes the following areas:
1. Properties panel: Displays configuration options for the mapping or selected transformation. Different options display based on the transformation type. Includes icons to quickly resize the Properties panel. You can also manually resize the Properties panel.
3. Transformation palette: Lists the transformations that you can use in the mapping. To add a transformation, drag the transformation to the mapping canvas.
4. Mapping canvas: The canvas where you configure a mapping. When you create a mapping, a Source transformation and a Target transformation are already on the canvas for you to configure.
6. Inventory panel: Lists Enterprise Data Catalog objects that you can add to the mapping as sources, targets, or lookup objects. Note: Your organization must have the appropriate licenses in order for you to see the Inventory panel. Objects appear in the inventory when you search for objects on the Data Catalog page, select them in the search results, and add them to the mapping. To delete an object from the inventory, click "X" in the row that contains the object. Displays when you click Inventory. To hide the panel, click Inventory again.
7. Parameters panel: Lists the parameters in the mapping. You can create, edit, and delete parameters, and see where the mapping uses parameters. Displays when you click Parameters. To hide the panel, click Parameters again.
8. Validation panel: Lists the transformations in the mapping and displays details about mapping errors. Use it to find and correct mapping errors. Displays when you click Validation. To hide the panel, click Validation again.
Mapping templates
You can use a mapping template instead of creating a mapping from scratch.
Mapping templates are divided into three categories: Integration, Cleansing, and Warehousing.
When you select a mapping template in the New Asset dialog box, you create a mapping that uses a copy of
the mapping template. The created mapping is not an elastic mapping, and you cannot run it on an elastic
cluster.
The mapping contains pre-populated transformations. Click on each of the transformations in the mapping to
see the purpose of the transformation, how the transformation is configured, and which parameters are used.
The following image shows the Augment data with Lookup template with the Source transformation selected.
The Description field shows how the Source transformation is configured:
You can use a mapping template as is or you can reconfigure the mapping. For example, the Augment data
with Lookup template uses the p_scr_conn parameter for the source connection. You can use the parameter
to specify a different connection each time you run the mapping task that uses this mapping. You might want
to use the same source connection every time you run the mapping task. You can replace the parameter
p_scr_conn with a specific connection, as shown in the following image:
When you save the mapping, you save a copy of the template. You do not modify the template itself.
Mapping configuration
Use the Mapping Designer to configure a mapping.
Configuring a mapping
Use the Mapping Designer to configure a mapping.
Follow these steps to create and configure a mapping. For specific information on how to configure each
transformation, see Transformations.
1. Click New > Mappings, and then perform one of the following tasks:
• To create a mapping from scratch, click Mapping or Elastic Mapping and then click Create. The
Mapping Designer appears with a Source transformation and a Target transformation in the mapping
canvas for you to configure.
• To create a mapping based on a template, click the template you want to use and then click Create.
The Mapping Designer appears with a complete mapping that you can use as is or you can modify.
• To edit a mapping, on the Explore page, navigate to the mapping. In the row that contains the
mapping, click Actions and select Edit. The Mapping Designer appears with the mapping that you
selected.
2. To specify the mapping name and location, in the Mapping Properties panel, enter a name for the
mapping and change the location. Or, you can use the default values if desired.
The default mapping name is Mapping followed by a sequential number.
Mapping names can contain alphanumeric characters and underscores (_). Maximum length is 100
characters.
The following reserved words cannot be used:
• AND
• OR
• NOT
• PROC_RESULT
• SPOUTPUT
• NULL
• TRUE
• FALSE
• DD_INSERT
• DD_UPDATE
• DD_DELETE
• DD_REJECT
If the Explore page is currently active and a project or folder is selected, the default location for the
asset is the selected project or folder. Otherwise, the default location is the location of the most recently
saved asset.
You can change the name or location after you save the mapping using the Explore page.
3. Optionally, enter a description of the mapping.
Maximum length is 4000 characters.
4. To configure the source, on the mapping canvas, click the Source transformation.
a. To specify a name and description for the Source transformation, in the Properties panel, click
General.
Transformation names can contain alphanumeric characters and underscores (_). Maximum length
is 75 characters.
The following reserved words cannot be used:
• AND
• OR
• NOT
• PROC_RESULT
• SPOUTPUT
• NULL
• TRUE
• FALSE
• DD_INSERT
• DD_UPDATE
• DD_DELETE
• DD_REJECT
You can enter a description if desired.
Maximum length is 4000 characters.
b. Click the Source tab and configure source details, query options, and advanced properties.
Source details, query options, and advanced properties vary based on the connection type. For more
information, see Transformations.
In the source details, select the source connection and source object. For some connection types,
you can select multiple source objects. If your organization administrator has configured Enterprise
Data Catalog integration properties, and you have added objects to the mapping from the Data
Catalog page, you can select the source object from the Inventory panel. You can also configure
parameters for the source connection and source object.
If you are configuring an elastic mapping, and you select an Amazon S3 or a Microsoft Azure Data
Lake connection, you can add an intelligent structure model to the Source transformation for some
source types. You must create the model before you can add it to the mapping. For more
information about intelligent structure models, see Components.
To configure a source filter or sort options, expand Query Options. Click Configure to configure a
filter or sort option.
c. Click the Fields tab to add or remove source fields, to update field metadata, or to synchronize
fields with the source.
d. To save your changes and continue, click Save.
5. To add a transformation, perform either of the following actions:
• On the transformation palette, drag the transformation onto the mapping canvas. If you drop the
transformation between two transformations that are connected, the Mapping Designer automatically
connects the new transformation to the two transformations.
• On the mapping canvas, hover over the link between transformations or select an unconnected
transformation and click the Add Transformation icon. Then select a transformation from the menu.
a. On the General tab, enter a name and description for the transformation.
b. Link the new transformation to appropriate transformations on the canvas.
When you link transformations, the downstream transformation inherits the incoming fields from
the previous transformation.
For a Joiner transformation, draw a master link and a detail link.
c. To preview fields, configure the field rules, or rename fields, click Incoming Fields.
d. Configure additional transformation properties, as needed.
The properties that you configure vary based on the type of transformation you create.
e. To save your changes and continue, click Save.
f. To add another transformation, repeat these steps.
6. To configure the Target transformation, on the mapping canvas, click Target.
a. Link the Target transformation to the appropriate upstream transformation.
b. On the General tab, enter the target name and optional description.
c. Click the Incoming Fields tab to preview incoming fields, configure field rules, or rename fields.
d. Click the Target tab and configure target details and advanced properties.
Target details and advanced target properties vary based on the connection type. For more
information, see Transformations.
In the target details, select the target connection, target object, and target operation. If your
organization administrator has configured Enterprise Data Catalog integration properties, and you
have added objects to the mapping from the Data Catalog page, you can select the target object
from the Inventory panel. You can also configure parameters for the target connection and target
object.
e. Click Field Mapping and map the fields that you want to write to the target.
7. To save your changes and continue, click Save.
8. To close the mapping, click Close.
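The naming rules in steps 2 and 4 are mechanical, so they can be expressed as a quick check. The following Python sketch is illustrative only and is not part of Data Integration; the function name and structure are assumptions:

    import re

    # Reserved words that mapping and transformation names cannot use.
    RESERVED_WORDS = {
        "AND", "OR", "NOT", "PROC_RESULT", "SPOUTPUT", "NULL", "TRUE", "FALSE",
        "DD_INSERT", "DD_UPDATE", "DD_DELETE", "DD_REJECT",
    }

    def validate_name(name, max_length):
        """Return a list of rule violations; an empty list means the name is valid."""
        errors = []
        if len(name) > max_length:
            errors.append("Name exceeds %d characters." % max_length)
        if not re.fullmatch(r"[A-Za-z0-9_]+", name):
            errors.append("Name can contain only alphanumeric characters and underscores.")
        if name.upper() in RESERVED_WORDS:
            errors.append("Name is a reserved word.")
        return errors

    # Mapping names allow up to 100 characters; transformation names allow up to 75.
    print(validate_name("m_FilterAccountsByState", 100))  # []
    print(validate_name("NULL", 75))                      # reserved word violation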
Rules and guidelines for mapping configuration
Consider the following rules and guidelines when you configure a mapping:
• A mapping does not need a Source transformation if it includes a Mapplet transformation and the mapplet
includes a source.
• You can configure multiple branches within the data flow. If you create more than one data flow, configure
the flow run order.
• Connect all transformations to the data flow.
• You can merge multiple upstream branches through a passive transformation only when all
transformations in the branches are passive.
• When you rename fields, update conditions and expressions that use the fields. Conditions and
expressions, such as a Lookup condition or expression in an Expression transformation, do not inherit
field name changes.
• To use a connection parameter and a specific object, use a connection and object in the mapping. When
the mapping is complete, you can replace the connection with a parameter.
• When you use a parameter for an object, use parameters for all conditions or field mappings in the data
flow that use fields from the object.
Rules and guidelines for mappings on GPU-enabled clusters
Use the following rules and guidelines when you configure a mapping that runs on a GPU-enabled elastic
cluster:
• The size of a partition file must be smaller than the GPU memory size. To check the GPU memory size,
refer to the AWS documentation for the selected worker instance type.
• The mapping cannot read from an Amazon Redshift source or a source based on an intelligent structure
model.
• The mapping cannot write to a CSV file.
• If the mapping includes NaN values, the output is unpredictable.
• The mapping cannot process timestamp data types from a Parquet source.
• If you need to process decimal data from a CSV file, read the data as a string and then convert it to a
float.
• When the mapping uses an Expression transformation, you can use only scientific functions and the
following numeric functions:
- ABS
- CEIL
- FLOOR
- LN
- LOG
- POWER
- SQRT
To learn how to configure a GPU-enabled cluster, refer to Data Integration Elastic Administration in the
Administrator help.
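To make the function restriction easier to apply, the following Python sketch flags function calls in an expression that fall outside the numeric list above. This is an illustrative check only: it does not model the allowed scientific functions, and the helper name is an assumption:

    import re

    # Numeric functions supported on GPU-enabled clusters, per the guidelines above.
    ALLOWED_NUMERIC = {"ABS", "CEIL", "FLOOR", "LN", "LOG", "POWER", "SQRT"}

    def functions_to_review(expression):
        """Return function names in the expression that are not in the allowed numeric set."""
        called = {name.upper() for name in re.findall(r"([A-Za-z_]+)\s*\(", expression)}
        return called - ALLOWED_NUMERIC

    print(functions_to_review("POWER(ABS(col1), 2) + MOD(col2, 10)"))  # {'MOD'}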
Data flow run order
A flow is all connected sources, targets, and transformations in a mapping. You can have multiple flows in a
mapping but not in an elastic mapping.
You can specify the flow run order for data flows with any target type.
You might want to specify the flow run order to maintain referential integrity when updating tables that have
primary or foreign key constraints. Or, you might want to specify the flow run order when you are processing
staged data.
If a flow contains multiple targets, you cannot configure the load order of the targets within the flow.
The following image shows a mapping with two data flows:
In this example, the top flow contains two pipelines and the bottom flow contains one pipeline. A pipeline
is a source and all the transformations and targets that receive data from that source. When you configure
the flow run order, you cannot configure the run order of the pipelines within a data flow.
The following image shows the flow run order for the mapping:
In this example, Data Integration runs the top flow first, and loads Target3 before running the second flow.
When Data Integration runs the second flow, it loads Target1 and Target2 concurrently.
If you add another data flow to the mapping after you configure the flow run order, the new flow is added to
the end of the flow run order by default.
If the mapping contains a mapplet, Data Integration uses the data flows in the last version of the mapplet
that was synchronized. If you synchronize a mapplet and the new version adds a data flow to the mapping,
the new flow is added to the end of the flow run order by default. You cannot specify the flow run order in
mapplets.
Note: You can also specify the run order of data flows in separate mapping tasks with taskflows. Configure
the taskflow to run the tasks in a specific order. For more information about taskflows, see Taskflows.
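As a rough illustration of these semantics, the following Python sketch (not product code) runs flows one after another while the targets inside each flow load concurrently; the flow and target names reuse the example above:

    from concurrent.futures import ThreadPoolExecutor

    def load(target):
        print("loading " + target)

    # The first flow loads Target3; the second flow loads Target1 and Target2.
    flow_run_order = [
        ["Target3"],
        ["Target1", "Target2"],
    ]

    for targets in flow_run_order:             # flows run sequentially, in order
        with ThreadPoolExecutor() as pool:     # targets in one flow load concurrently
            list(pool.map(load, targets))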
Configuring the data flow run order
Configure the order in which Data Integration runs the data flows in a mapping.
1. In the Mapping Designer, click Actions and select Flow Run Order.
2. In the Flow Run Order dialog box, select a data flow and use the arrows to move it up or down.
3. Click Save.
Mapping validation
Each time you save a mapping, the Mapping Designer validates the mapping.
When you save a mapping, check the status to see if the mapping is valid. Mapping status displays in the
header of the Mapping Designer.
If the mapping is not valid, you can use the Validation panel to view the location and details of mapping
errors. The Validation panel displays a list of the transformations in the mapping. Error icons display by the
transformations that include errors.
In the following example, the Accounts_By_State Target transformation contains one error:
Tip: If you click a transformation name in the Validation panel, the transformation is selected in the Mapping
Designer.
Due to processing differences, the Validation panel shows additional validation errors for some
transformations in elastic mappings. The validation errors result from validation criteria on the Serverless
Spark engine.
Validating a mapping
Use the Validation panel to view error details.
1. To open the Validation panel, in the toolbar, click Validation as shown in the following image:
3. To refresh the Validation panel after you make changes to the mapping, click the refresh icon.
Mapping data preview
To preview mapping data, your organization must have the appropriate license.
You preview data for a transformation on the Preview panel of the transformation. Select the number of
source rows to process and the runtime environment that runs the preview job.
When you run a preview job, Data Integration creates a temporary mapping task that contains a virtual target
immediately downstream of the selected transformation. Data Integration discards the temporary task after
the preview job completes. When the job completes, Data Integration displays the data that is transformed by
the selected transformation on the Preview panel.
The following image shows the Preview panel of a Sorter transformation after you run a preview job on the
transformation:
To preview data in a mapping, you must have the Data Integration Data Previewer role or your user role must
have the "Data - preview" feature privilege for Data Integration.
You can preview data if the mapping uses input parameters. Data Integration prompts you for the parameter
values when you run the preview.
You cannot preview data when you develop a mapplet in the Mapplet Designer. You cannot preview data if
the table name contains special characters, emojis, or Unicode characters.
Preview behavior in non-elastic mappings
You can preview data for any transformation except for the following transformations:
• Data Masking
• Hierarchy Builder
• Sequence Generator
• Velocity
• Web Services
• Target
Preview behavior in elastic mappings
You can preview data for any transformation except for the following transformations:
• Normalizer
• Router
• Hierarchy Processor with multiple output groups
• Target
You can preview data for sources except for sources that meet the following conditions:
• Parameterized sources
• Sources with binary fields
A preview job runs on the same cluster as other elastic jobs. If the cluster is saturated with other jobs, the
preview job can take longer to run and return data.
Note: Effective in the July 2021 release, previewing data in elastic mappings is available for preview.
Preview functionality is supported for evaluation purposes but is unwarranted and is not supported in
production environments or any environment that you plan to push to production. Informatica intends to
include the preview functionality in an upcoming release for production use, but might choose not to in
accordance with changing market or technical circumstances. For more information, contact Informatica
Global Customer Support.
Running a preview job
Before you run a data preview job for a non-elastic mapping, verify that the following conditions are true:
• Verify that a Secure Agent is available to run the job. You cannot run a preview job using the Hosted
Agent.
• Verify that the Secure Agent machine has enough disk space to store the preview data.
• Verify that there are no mapping validation errors in the selected transformation or any upstream
transformation.
Before you run a data preview job for an elastic mapping, verify that there are no mapping validation errors in
the selected transformation.
4. Select the number of source rows to process.
You can select up to 999,999,999 rows.
Warning: Selecting a large number of source rows can cause storage or performance issues on the
Secure Agent machine.
5. If the part of the mapping that you are previewing uses input parameters, click Next and enter the
parameter values.
6. Click Run Preview.
Data Integration displays the results in the Preview panel for the selected transformation. To view the results
for an upstream transformation in a non-elastic mapping, select the transformation and open the Preview
panel.
You can monitor preview jobs on the My Jobs page in Data Integration and on the All Jobs and Running Jobs
pages in Monitor. Data Integration names the preview job <mapping name>-<instance number>, for example,
MyMapping_1. You can download the session log for a data preview job.
To restart a preview job, run the job again on the Preview panel. You cannot restart a data preview job on the
My Jobs or All Jobs pages.
Viewing preview results for non-elastic mappings
Data Integration displays preview results on the Preview panel of the selected transformation and each
upstream transformation. Data Integration does not display preview results for downstream transformations.
If a transformation has multiple output groups and you want to preview results for a different output group,
select the output group from the Output Groups menu at the top of the Preview panel.
Data Integration stores preview results in CSV files on the Secure Agent machine. When you run a preview,
Data Integration creates one CSV file for the selected transformation and one CSV file for every upstream
transformation in the mapping. If a transformation has multiple output groups, Data Integration creates one
CSV file for each output group. If you run the same preview multiple times, Data Integration overwrites the
CSV files.
The CSV files are stored in a preview directory on the Secure Agent machine. The location is determined by
the $PMCacheDir property for the Data Integration Server service that runs on the Secure Agent, which an
organization administrator can change. For more information about Secure Agent services, see the
Administrator help.
Note: Ensure that the Secure Agent machine has enough disk space to store preview data for all users that
might run a data preview using the Secure Agent.
Data Integration purges the preview directory once every 24 hours. During the purge, Data Integration deletes
files that are more than 24 hours old.
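As an illustration of this purge behavior, the following Python sketch deletes files that are more than 24 hours old from a directory. It is not product code, and the directory path is a placeholder rather than the documented preview location:

    import os
    import time

    PREVIEW_DIR = "/path/to/preview"   # placeholder directory
    MAX_AGE_SECONDS = 24 * 60 * 60     # files older than 24 hours are deleted

    now = time.time()
    for name in os.listdir(PREVIEW_DIR):
        path = os.path.join(PREVIEW_DIR, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)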
To open the Settings dialog, click the settings icon on the Preview panel. Columns in the Selected Columns
area appear on the Preview panel. To hide a column from the Preview panel, select it and move it to the
Available Columns area. To reorder the columns in the Preview panel, select a column name in the Selected
Columns area and move it up or down.
Viewing preview results for elastic mappings
Data Integration Elastic shows complex data cells as hyperlinks on the Preview page. When you click a
hyperlink, the preview data displays in a separate panel.
Consider the following guidelines when you view preview results for elastic mappings:
• Job name, start time, end time, and job status columns show null values.
• Preview results for the Sequence Generator transformation may not be consistent from one preview run to
the next. For example when you set the initial value in the transformation to 1, you may get a result of 1, 2,
3, 4 the first time you run a preview. The second time you run a preview, you may get 2001, 2002, 2003,
2004.
Data Integration Elastic purges the stored preview data files once every 30 minutes. Data Integration
Elastic also deletes preview data when a Secure Agent stops a cluster.
To open the Settings dialog, click the settings icon on the Preview panel. Columns in the Selected Columns
area appear on the Preview panel. To hide a column from the Preview panel, select it and move it to the
Available Columns area. To reorder the columns in the Preview panel, select a column name in the Selected
Columns area and move it up or down.
Testing a mapping
When you perform a test run, you run a temporary mapping task. The task reads source data, writes target
data, and performs all calculations in the data flow. Data Integration discards the temporary task after the
test run.
You can perform a test run from the Mapping Designer or from the Explore page.
To test run a mapping from the Mapping Designer, perform the following steps:
1. In the Mapping Designer, click Run.
2. Select the runtime environment and then click Run.
To test run a mapping from the Explore page, perform the following steps:
1. Navigate to the mapping and in the row that contains the mapping, click Actions and select Run.
2. Select the runtime environment and then click Run.
Note that if you select New Mapping Task instead of Run, Data Integration creates a mapping task and saves
it in the location you specify. For more information about mapping tasks, see Tasks.
Mapping tutorial
The following tutorial describes how to create a simple mapping, save and validate the mapping, and create a
mapping task.
For this tutorial, you have a file that contains U.S. customer account information. You want to create a file
that contains customer account information for a specific state.
The mapping includes the following transformations:
• Source transformation. Represents the source file that contains account information.
• Filter transformation. Transformation that filters out all account information except for a specific state.
• Target transformation. Represents the target file that contains account information for a specific state.
You create parameters in the mapping so that you can use the same mapping task to create files for each
state. You create the following parameters:
• In the Filter transformation, you create a parameter to hold the state value so that you can use the
mapping to create multiple tasks. You can specify a different state value for each task.
• In the Target transformation, you create a parameter for the target object so that you have separate target
files for each state.
After you define the mapping, you create a mapping task that is based on the mapping. When you run the
mapping task, you specify values for the target object parameter and the filter parameter. The mapping task
then writes the data to the specified target based on the specified state.
In this tutorial, you perform the following steps:
1. Create a mapping.
2. Configure a source. Specify the name of the source object and the connection to use.
3. Create a Filter transformation. Create a parameter in the Filter transformation to hold the state value.
4. Configure a target. Specify the connection to use and create a parameter for the target object.
5. Validate the mapping to be sure there are no errors. The mapping must be valid before you can run a
task based on the mapping.
6. Create a task based on the mapping. The task includes parameters for the target object and state value,
which you specify when you run the task.
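Before walking through the UI steps, it can help to see the logic that the finished mapping performs. The following Python sketch is an outside-the-product illustration; the file names are assumptions, and only the State column comes from the tutorial:

    import csv

    state = "MD"  # the tutorial's default filter value

    # Read the account source file, keep rows for one state, and write a
    # state-specific target file, like the Source, Filter, and Target
    # transformations in this tutorial.
    with open("account.csv", newline="") as src, \
         open("accounts_by_state_%s.csv" % state, "w", newline="") as tgt:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(tgt, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["State"] == state:   # the Filter transformation's condition
                writer.writerow(row)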
Preparing for the mapping tutorial
1. Download the sample Account source file from the Informatica Cloud Community and save the file in a
directory local to the Secure Agent. You can download the file from the following link:
Sample Source File for the Mapping Tutorial
2. Create a flat file connection for the directory that contains the saved sample Account source file.
3. On the Explore page, create a project and name the project AccountsByState, and then create a folder in
the project and name the folder Mappings.
Step 1. Create a mapping
1. To create a mapping, click New and then in the New Asset dialog box, click Mapping.
The following image shows the New Asset dialog box with Mapping selected:
2. Click Create. The Mapping Designer appears with a new mapping displayed in the mapping canvas.
The following image shows a new mapping in the Mapping Designer:
4. To select a location for the mapping, browse to the folder you want the mapping to reside in, or use the
default location.
If the Explore page is currently active and a project or folder is selected, the default location for the
asset is the selected project or folder. Otherwise, the default location is the location of the most recently
saved asset.
Step 2. Configure a source
When you design a mapping, the first transformation you configure is the Source transformation. You specify
the source object in the Source transformation. The source object represents the source of the data that you
want to use in the mapping. You add the source object at the beginning of mapping design because the
source properties can affect the downstream data. For example, you might filter data at the source, which
affects the data that enters downstream transformations.
1. On the mapping canvas, click the Source transformation.
2. In the Properties panel, click General and enter src_FF_Account for the Source transformation name.
3. Specify which connection to use based on the source object. In this case, the source object is a flat file
so the connection needs to be a flat file connection.
Click Source and configure the following properties:
Source Type: Source object or a parameter, which is a placeholder for the source object that you specify
when you run a task based on the mapping. Select Single Object.
The following image shows the src_FF_Account details in the Properties panel:
To view the source fields and field metadata, click the Fields tab.
4. To save the mapping and continue, click Save.
Step 3. Create a Filter transformation
You want this mapping to run tasks that filter accounts based on specific states. To accomplish this, you add
a Filter transformation to the data flow to capture state information. You then define a parameter in the filter
condition to hold the state value. When you use a parameter, you can reuse the same mapping to create
multiple tasks. You can specify a different state value for each task. Or you can use the same mapping task
and change the value for state when you run the task.
The sample Account source file includes a State field. When you use the State field in the filter condition, you
can write data to the target based on the state. For example, when you use State = MD as the condition, you
include accounts based in Maryland in the data flow. When you use a parameter for the value of the filter
condition, you can define the state that you want to use when you run the task.
Field rules define the fields that enter the transformation and how they are named. By default, all available
fields are included in the transformation. You might want to exclude unnecessary fields when you have large
source files. Or you might want to change the names of certain incoming fields, for example, when you have
multiple sources in a mapping. Field rules are configured on the Incoming Fields tab. For this tutorial, do not
configure field rules.
1. To add a Filter transformation, drag a Filter transformation from the Transformation palette to the
mapping canvas and drop it between the src_FF_Account Source transformation and the NewTarget
transformation.
Note: You might need to scroll through the Transformation palette to find the Filter transformation.
When you drop a new transformation in between two transformations in the canvas, the new
transformation is automatically linked in the data flow, as shown in the following image:
When you link transformations, the downstream transformation inherits fields from the previous
transformation.
2. To configure the Filter transformation, select the Filter transformation on the mapping canvas.
3. To name the Filter transformation, in the Properties panel, click General and enter flt_Filter_by_State
for the Filter transformation name.
4. To create a simple filter with a parameter for the value, click Filter. For the Filter Condition, select
Simple.
5. Click Add New Filter Condition, as shown in the following image:
When you click Add New Filter Condition, a new row is created where you specify values for the new
filter condition.
6. For Field Name, select State.
7. For Operator, select Equals.
8. To parameterize the filter condition value, for Value, select New Parameter.
9. In the New Parameter dialog box, configure the following options:
Display Label: Label that shows in the mapping task wizard where you enter the condition value. Enter
Filter Value for State for the label.
Default Value: Default value for the filter condition. The mapping task uses this value unless you specify a
different value. You want to run the task for accounts in Maryland by default, so enter MD.
10. Click OK. The new filter condition displays in the Properties panel, as shown in the following image:
Step 4. Configure a target
In the Target transformation, you create a parameter for the target object so that you have a separate target
file for each state. For example, you might want files that only include data for the states you specify in the filter parameter
when you run the mapping task. When you run the task, if you filter for accounts in California, you can select
the file that contains data for California accounts as the target.
Parameter: Parameter to use for the target object. This field appears only when you select Parameter as the
target type. Click New Parameter, and for the parameter name, enter p_StateTargetParameter. For the
display label, enter Accounts for State. Click OK.
The following image shows the properties for the tgt_Accounts_by_State Target transformation:
4. Click Field Mapping and for Field map options, select Automatic. You cannot specify field mappings
because the target object is parameterized. Because you can select different target objects each time
you run the task, the fields in the target objects might not be the same each time you run the task.
5. Click Save. You now have a complete mapping.
Step 5. Validate and test the mapping
You can save a mapping that is not valid. However, you cannot run a task that uses a mapping that is not
valid. An example of an invalid mapping is a Source transformation or Target transformation that does not
have a specified connection, or a mapping that does not have a Source transformation and a Target
transformation.
2. If the mapping is not valid, perform the following steps:
a. In the header, click the Validation icon to open the Validation panel.
The Validation panel lists the mapping and the transformations used in the mapping and shows
where the errors occur. For example, in the following image, the tgt_Accounts_by_State Target
transformation has an error:
b. After you correct the errors, save the mapping and then in the Validation panel, click Refresh. The
Validation panel updates to list any errors that might still be present.
3. To test the mapping, click Run.
4. In the wizard, select the runtime environment and click Next.
The Targets page appears. The Targets page appears because the target is parameterized. If you create
a mapping that does not include a parameterized target, you will not see the Targets page when you run
the mapping.
The parameter you created for the target object displays on the page with the Accounts for State label,
as you specified when you created the parameter.
You can select a target object or create a new target object. For this tutorial, let's create a new target
object that will include accounts for Texas.
5. On the Targets page, click Create Target.
9. Click Run.
The mapping task runs and then returns you to the Mapping Designer.
10. In the navigation bar, click My Jobs. The My Jobs page lists all of the jobs that you have run. At the top,
you should see the mapping task that was created when you ran the mapping, as shown in the following
diagram:
The My Jobs page shows that there were three accounts from Texas, which are now in your
tgt_Accounts_By_State_TX target file.
Step 6. Create a mapping task
Now that you have a valid mapping, you can use the mapping task wizard to create tasks based on the
mapping. Because you used parameters in the mapping, each time you run the task, you can change the
parameter values. After the task runs, you have a target file that contains account information for the state
that you specify in the filter condition parameter.
1. To create a mapping task based on the mapping, while still in the Mapping Designer, click Actions and
select New Mapping Task, as shown in the following image:
The following image shows the Definition page in the mapping task wizard:
4. Click Next.
The Targets page appears. You can either select a target object from the list or click Create Target to
create a new target.
5. Click Next.
The Input Parameters page appears and displays the Filter Value for State filter condition. You can
change the value of the parameter, if desired.
6. Click Next.
The Schedule page appears. Because this is a tutorial and we do not want to run this mapping task on a
regular basis, leave the default values.
7. Click Finish to save the Accounts by State mapping task.
Mapping maintenance
You can view, configure, copy, move, delete, and test run mappings from the Explore page.
When you use the View action to look at a mapping, the mapping opens in the Mapping Designer. You can
navigate through the mapping and select transformations to view the transformation details. You cannot edit
the mapping in View mode.
When you copy a mapping, the new mapping uses the original mapping name with a number appended. For
example, when you copy a mapping named ComplexMapping, the new mapping name is ComplexMapping_2.
You can delete a mapping that is not used by a mapping task. Before you delete a mapping that is used in a
task, delete the task or update the task to use a different mapping.
Mapping revisions and mapping tasks
You might need to update a mapping that is used in a mapping task.
When you update a mapping used in a mapping task, the mapping task uses the revised mapping. If you
change the mapping so that the mapping task is incompatible with the mapping, an error occurs when you
run the mapping task.
For example, you add a parameter to a mapping after the mapping task was created and you do not update
the mapping task to specify a value for the parameter. When you run the mapping task, an error occurs.
If you do not want your updates to affect the mapping task, you can make a copy of the mapping, give the
new mapping a different name, and then apply your updates to the new mapping.
Bigint data conversion
Data Integration does not convert Bigint data in mappings created after the Spring 2020 September release.
Chapter 2
Parameters
Parameters are placeholders that represent values in a mapping or mapplet. Use parameters to hold values
that you want to define at run time, such as a source connection, a target object, or the join condition for a
Joiner transformation. You can also use parameters to hold values that change between task runs, such as a
time stamp that is incremented each time a mapping runs.
Input Parameters
An input parameter is a placeholder for a value or values in a mapping or mapplet. Input parameters help
you control the logical aspects of a data flow or set other variables that you can use to manage
different targets.
When you define an input parameter in a mapping, you set the value of the parameter when you
configure a mapping task.
In-Out Parameters
An in-out parameter holds a variable value that can change each time a task runs, to handle things like
incremental data loading. When you define an in-out parameter, you can set a default value in the
mapping but you typically set the value at run time using an Expression transformation. You can also
change the value in the mapping task.
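As a conceptual illustration of how an in-out parameter can carry a value between runs, the following Python sketch persists a timestamp across task runs. The JSON file and the $$LastRunTime name are assumptions for illustration, not how Data Integration stores in-out parameters:

    import json
    from datetime import datetime, timezone

    STATE_FILE = "inout_params.json"   # stands in for parameter storage

    def run_task():
        # Read the persisted value, or fall back to the default set in the mapping.
        try:
            with open(STATE_FILE) as f:
                params = json.load(f)
        except FileNotFoundError:
            params = {"$$LastRunTime": "1970-01-01T00:00:00+00:00"}

        # Use the value at run time, for example in an incremental-load filter.
        print("loading rows modified after " + params["$$LastRunTime"])

        # Update the value for the next run, like an Expression transformation
        # setting the parameter during the task.
        params["$$LastRunTime"] = datetime.now(timezone.utc).isoformat()
        with open(STATE_FILE, "w") as f:
            json.dump(params, f)

    run_task()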
Input parameters
An input parameter is a placeholder for a value or values in a mapping. You define the value of the parameter
when you configure the mapping task.
You can create an input parameter for logical aspects of a data flow. For example, you might use a parameter
in a filter condition and a parameter for the target object. Then, you can create multiple tasks based on the
mapping and write different sets of data to different targets. You could also use an input parameter for the
target connection to write target data to different Salesforce accounts.
The following table describes the input parameters that you can create in each transformation:
Source: You can use an input parameter for the following parts of the Source transformation:
- Source connection. You can configure the connection type for the parameter or allow any
connection type. In the task, you select the connection to use.
- Source object. In the task, you select the source object to use. For relational and Salesforce
connections, you can specify a custom query for a source object.
- Filter. In the task, you configure the filter expression to use. To use a filter for a parameterized
source, you must use a parameter for the filter.
- Sort. In the task, you select the fields and type of sorting to use. To sort data for a
parameterized source, you must use a parameter for the sort options.
Target: You can use an input parameter for the following parts of the Target transformation:
- Target connection. You can configure the connection type for the parameter or allow any
connection type. In the task, you select the connection to use.
- Target object. In the task, you select the target object to use.
- Completely parameterized field mapping. In the task, you configure the entire field mapping for
the task.
- Partially parameterized field mapping. Based on how you configure the parameter, you can use
the partial field mapping parameter as follows:
- Configure links in the mapping and display unmapped fields in the task.
- Configure links in the mapping and display all fields in the task. Allows you to edit links
configured in the mapping.
All transformations with incoming fields: You can use an input parameter for the following parts of the
Incoming Fields tab of any transformation:
- Field rule: Named field. You can use a parameter when you use the Named Fields field selection criteria
for a field rule. In the task, you select the field to use in the field rule.
- Renaming fields: Pattern. You can use a parameter to rename fields in bulk with the pattern option. In the
task, you enter the regular expression to use.
Aggregator: You can use an input parameter for the following parts of the Aggregator transformation:
- Group by: Field name. In the task, you select the incoming field to use.
- Aggregate expression: Additional aggregate fields. In the task, you specify the fields to use.
- Aggregate expression: Expression for aggregate field. In the task, you specify the expression
to use for each aggregate field.
Data Masking: You can use an input parameter for masking techniques in the Data Masking transformation.
In the task, you select and configure the masking techniques.
Expression: You can use an input parameter for an expression in the Expression transformation.
In the task, you create the entire expression.
Filter You can use an input parameter for the following parts of the Filter transformation:
- Completely parameterized filter condition. In the task, you enter the incoming field and value,
or you enter an advanced data filter.
- Simple or advanced filter condition: Field name. In the task, you select the incoming field to
use.
- Simple or advanced filter condition: Value. In the task, you select the value to use.
Joiner You can use an input parameter for the following parts of the Joiner transformation:
- Join condition. In the task, you define the entire join condition.
- Join condition: Master field. In the task, you select the field in the master source to use.
- Join condition: Detail field. In the task, you select the field in the detail source to use.
Input parameters 39
Transformation Input parameter use in mappings and tasks
Lookup You can use an input parameter for the following parts of the Lookup transformation:
- Lookup connection. You can configure the connection type for the parameter or allow any
connection type. In the task, you select the connection to use.
- Lookup object. In the task, you select the lookup object to use.
- Lookup condition: Lookup field. In the task, you select the field in the lookup object to use.
- Lookup condition: Incoming field. In the task, you select the field in the data flow to use.
Mapplet You can use an input parameter for the following parts of the Mapplet transformation:
- Connection. If the mapplet uses connections, you can configure the connection type for the
parameter or allow any connection type. In the task, you select the connection to use.
- Completely parameterized field mapping. In the task, you configure the entire field mapping for
the task.
- Partially parameterized field mapping. Based on how you configure the parameter, you can use
the partial field mapping parameter as follows:
- Configure links in the mapping that you want to enforce, and display unmapped fields in the
task.
- Configure links in the mapping, and allow all fields and links to appear in the task for
configuration.
You can configure input parameters separately for each input group.
Rank You can use an input parameter for the number of rows to include in each rank group.
In the task, you enter the number of rows.
Router You can use an input parameter for the following parts of the Router transformation:
- Completely parameterized group filter condition. In the task, you enter the expression for the
group filter condition.
- Simple or advanced group filter condition: Field name. In the task, you select the incoming
field to use.
- Simple or advanced group filter condition: Value. In the task, you select the value to use.
Sorter You can use an input parameter for the following parts of the Sorter transformation:
- Sort condition: Sort field. In the task, you select the field to sort.
- Sort condition: Sort Order. In the task, you select either ascending or descending sort order.
SQL You can use an input parameter for the following parts of the SQL transformation:
- Connection: In the Mapping Designer, select the stored procedure or function before you
parameterize the connection. Use the Oracle or SQL Server connection type. In the task, you
select the connection to use.
- User-entered query: You can use string parameters to define the query. In the task, you enter
the query.
Structure Parser You can use an input parameter for the following parts of the Structure Parser transformation:
- Completely parameterized field mapping. In the task, you configure the entire field mapping for
the task.
- Partially parameterized field mapping. Based on how you configure the parameter, you can use
the partial field mapping parameter as follows:
- Configure links in the mapping that you want to enforce, and display unmapped fields in the
task.
- Configure links in the mapping, and allow all fields and links to appear in the task for
configuration.
40 Chapter 2: Parameters
Transformation Input parameter use in mappings and tasks
Transaction You can use an input parameter for the following parts of the Transaction Control
Control transformation:
- Transaction Control condition: In the task, you specify the expression to use as the transaction
control condition.
- Advanced transaction control condition: Expression. In the task, you specify the string or field
to use in the expression.
Union You can use an input parameter for the following parts of the Union transformation:
- Completely parameterized field mapping. In the task, you configure the entire field mapping for
the task.
- Partially parameterized field mapping. Based on how you configure the parameter, you can use
the partial field mapping parameter as follows:
- Configure links in the mapping that you want to enforce, and display unmapped fields in the
task.
- Configure links in the mapping, and allow all fields and links to appear in the task for
configuration.
You can configure input parameters separately for each input group.
For example, when you create a connection parameter, you can use it as a source, target, or lookup
connection. An expression parameter can represent an entire expression in the Expression transformation or
the join condition in the Joiner transformation. In a transformation, only input parameters of the appropriate
type display for selection.
You can create the following types of input parameters:

string
In the task, the string parameter displays as a text box in most instances. A Named Fields string
parameter displays a list of fields from which you can select a field.
connection
Represents a connection. You can specify the connection type for the parameter or allow any connection
type.
• Source connection
• Lookup connection
• Mapplet connection
• Database connection in the SQL transformation
• Target connection
expression
Represents an expression.
In the task, displays the Field Expression dialog box to configure an expression.
data object
In the task, appears as a list of available objects from the selected connection.
• Source object
• Lookup object
• Target object
field
Represents a field.
In the task, displays as a list of available fields from the selected object.
field mapping
Represents field mappings for the task. You can create a full or partial field mapping.
Use a full field mapping parameter to configure all field mappings in the task. In the task, a full field
mapping parameter displays all fields for configuration.
Use a partial field mapping to configure field mappings in the mapping and in the task.
• Preserve links configured in the mapping. Link fields in the mapping that must be used in the task.
In the task, the parameter displays the unmapped fields.
• Allow changes to the links configured in the mapping. Link fields in the mapping that can be changed
in the task.
In the task, the parameter displays all fields and the links configured in the mapping. You can create
links and change existing links.
You can use field mapping parameters in the following locations:
• Target transformation
• Mapplet transformation
• Structure Parser transformation
• Union transformation

mask rule
Represents a masking technique. In the task, the mask rule parameter displays a list of masking techniques. You select and configure a masking technique for each incoming field.
The Input Parameter panel displays all input parameters in the mapping. You can view details about the input
parameter and the transformation where you use the parameter.
When you create a parameter in the Input Parameter panel, you can create any type of parameter. In a
transformation, you can create the type of parameter that is appropriate for the location.
If you edit or delete an input parameter, consider how transformations that use the parameter might be
affected by the change. For example, if a SQL transformation uses a connection parameter, the connection
type must be Oracle or SQL Server. If the connection parameter is changed so that the connection type is no longer Oracle or SQL Server, the SQL transformation can no longer use the connection parameter.
To configure a mapping with a connection parameter, configure the mapping with a specific connection.
Then, you can select the source, target, or lookup object that you want to use and configure the mapping.
After the mapping is complete, you can replace the connection with a parameter without causing changes to
other mapping details.
When you use an input parameter for a source, lookup, or target object, you cannot define the fields for the
object in the mapping. Parameterize any conditions and field mappings in the data flow that would use fields
from the parameterized object.
When you create an input parameter, you can use the parameter properties to provide guidance on how to
configure the parameter in the task. The parameter description displays in the task as a tooltip, so you can
add important information about the parameter value in the description.
The following table describes input parameter properties and how they display in a mapping task:
Name
Parameter name. Displays as the parameter name in the task if you do not configure a display label. If you configure a display label, Name does not display in the task.

Display Label
Display label for the parameter. Displays as the parameter name in the task.

Description
Description of the parameter. Displays as a tooltip for the parameter in the task. Use the description to provide additional information or instructions for parameter configuration.

Type
Parameter type. Determines where you can use the parameter. Also determines how the parameter displays in a mapping task:
- String. Displays a text box. For the Named Fields selection criteria, displays a list of fields.
- Connection. Displays a list of connections.
- Expression. Displays a Field Expression dialog box so you can create an expression.
- Data object. Displays a list of available objects from the configured connection.
- Field. Displays a list of fields from the selected object.
- Field mapping. Displays field mapping tables that allow you to map fields from the data flow to the target object.

Connection Type
Determines the type of connection to use in the task. Applicable when the parameter type is Connection. For example, if you select Oracle, only Oracle connections are available in the task.

Allow parameter to be overridden at run time
Determines whether parameter values can be changed with a parameter file when the task runs. When you configure the task, you specify a default value for the parameter, and you define the value to use in the parameter file. Applicable for data objects and connections with certain connection types. To see if a connector supports runtime override of source and target connections and objects, see the help for the appropriate connector.

Default Value
Default value for the parameter. Displays as the default value for the parameter, when available. For example, if you enter a connection name as the default value and the connection name does not exist in the organization, no default value displays.

Allow partial mapping override
Determines whether field mappings specified during mapping configuration can be changed in the task. Applicable when the parameter type is Field mapping. Do not select Allow Partial Mapping Override if you want to enforce the links that you configure in the mapping.
For example, if you completely parameterize the source filter, you must include a query similar to the
following example:
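One possible form, assuming a hypothetical source table ORDERS with a numeric AMOUNT field, where the user enters the entire condition in the task:
ORDERS.AMOUNT > 1000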
To partially parameterize the filter, you can specify the field as a variable, as shown in this example:
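A sketch of such a filter, assuming a hypothetical field parameter named $OrderField$ and using the $<Parameter_Name>$ syntax described below:
$OrderField$ > 1000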
In this case, the user can select the required field in the mapping task.
To implement partial parameterization, you must use a database connection and a Source transformation
advanced filter or a Filter, Expression, Router, or Aggregator transformation. You can create an input
parameter for one of the fields so that the user can select a specific field in the mapping task instead of
writing a complete query. "String" and "field" are the only valid types.
Note: You can use the same parameter in all the supported transformations.
In the following example, the filter condition uses a parameter for the field name:
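A sketch of one such condition, where users is an assumed source object and $TxnField$ is a hypothetical field parameter that the user resolves to a specific field in the mapping task:
users.$TxnField$ > 100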
• If you define a field type parameter in a Source transformation advanced filter, you can reuse it in a
downstream transformation like a Router, Filter, Expression, or Aggregator. You cannot directly use field
type parameters in other transformations.
• To distinguish parameters used for partial parameterization from in-out parameters ($$myVar), represent
the parameter like an expression macro, for example, $<Parameter_Name>$.
• If you use a field type parameter in a Source transformation with multiple objects, qualify the parameter
with the object name. You can either use the object name in the mapping or use a string type parameter to
configure it in a mapping task.
• You cannot pass values for partial parameterization through a parameter file.
• You cannot use a user-defined function in an expression that uses partial parameterization. For example,
the following expression is not valid:
concat($Field$,:UDF.RemoveSpaces(NAME))
When you create a mapping that includes source parameters, add the parameters after you configure the mapping.
For example, you have multiple customer account tables in different databases, and you want to run a
monthly report to see customers for a specific state. When you create the mapping, you want to use
parameters for the source connection, source object, and state. You update the parameter values to use
at runtime when you configure the task.
When you create a mapping with a parameterized target that you want to create at runtime, set the target field mapping
to automatic.
If you create a mapping with a parameterized target object and you want to create the target at runtime,
you must set the target field mapping to Automatic on the Target transformation Field Mapping tab.
Automatic field mapping automatically links fields with the same name. You cannot map fields manually
when you parameterize a target object.
In-out parameters
An in-out parameter is a placeholder for a value that stores a counter or task stage. Data Integration
evaluates the parameter at run time based on your configuration.
In-out parameters act as persistent task variables. The parameter values are updated during task execution.
The parameter might store a date value for the last record loaded from a data warehouse or help you manage
the update process for a slowly changing dimension table.
For example, you might use an in-out parameter in one of the following ways:
Handle incremental data loading for a data warehouse.
In this case, you set a filter condition to select records from the source that meet the load criteria. When
the task runs, you include an expression to increment the load process. You might choose to define the
load process based on one of the following criteria:
• A range of records configured in an expression to capture the maximum value of the record ID to
process in a session.
• A time interval, using parameters in an expression to capture the maximum date/time values, after
which the session ends. You might want to evaluate and load transactions daily.
Parameterize an expression.
You might want to parameterize an expression and update it when the task runs. Create a string or text
parameter and enable Is expression variable. Use the parameter in place of an expression and resolve
the parameter at run time in a parameter file.
For example, you create the expression field parameter $$param and override the parameter value with
the following values in a parameter file:
$$param=CONCAT(NAME,$$year)
$$year=2020
When the task runs, Data Integration concatenates the NAME field with 2020.
To view the parameter values after the task completes, open the job details from the All Jobs or My Jobs page. You can also get these values when you work in the Mapping Designer or through the REST API.
Note: Using in-out parameters in simultaneous mapping task runs can cause unexpected results.
You can use in-out parameters in the following transformations:
• Source
• Target
• Aggregator, but not in expression macros
• Expression, but not in expression macros
• Filter
• Router
• SQL, but not in elastic mappings
• Transaction Control, but not in elastic mappings
For each in-out parameter, you configure the variable name, data type, default value, aggregation type, and retention policy. You can also use a parameter file that contains the value to apply at run time. For a specific task run, you can change the value in the mapping task.
Unlike input parameters, an in-out parameter can change each time a task runs. The latest value of the
parameter is displayed in the job details when the task completes successfully. The next time the task runs,
the mapping task compares the in-out parameter to the saved value.
You can also reset the in-out parameters in a mapping task, and then view the saved values in the job details.
Aggregation types
The aggregation type of an in-out parameter determines the final current value of the parameter when the
task runs. You can use variable functions with a corresponding aggregation type to set the parameter value
at run time.
You can select one of the following aggregation types for each parameter:
• Count
• Max
• Min
Variable functions
Variable functions determine how a task calculates the current value of an in-out parameter at run time.
You can use variable functions in an expression to set the current parameter value when a task runs.
To keep the parameter value consistent throughout the task run, use a valid aggregation type in the
parameter definition. For example, you can use the SetMaxVariable function with the Max aggregation type
but not the Min aggregation type.
The following table describes the available variable functions, aggregation types, and data types that you use
with each function:
SetVariable
Sets the parameter to the configured value. At the end of a task run, it compares the final current value to the start value. Based on the aggregation type, it saves a final value in the job details. This function is only available for non-elastic mappings.
Aggregation type: Max or Min.
Data types: All transformation data types except string and text data types are available for elastic mappings.

SetMaxVariable
Sets the parameter to the maximum value of a group of values. In an elastic mapping, this function is only available for the Expression transformation.
Aggregation type: Max.
Data types: All transformation data types except string and text data types are available for elastic mappings.

SetMinVariable
Sets the parameter to the minimum value of a group of values. In an elastic mapping, this function is only available for the Expression transformation.
Aggregation type: Min.
Data types: All transformation data types except string and text data types are available for elastic mappings.
Note: Use variable functions one time for each in-out parameter in a pipeline. During run time, the task evaluates each function as it encounters the function in the mapping. As a result, the task might evaluate functions in a different order each time the task runs. This might cause inconsistent results if you use the same variable function multiple times in a mapping.
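For example, a minimal sketch of a variable function call in an Expression transformation field, assuming a hypothetical in-out parameter $$MaxOrderID with the Max aggregation type and a numeric incoming field ORDER_ID:
SETMAXVARIABLE($$MaxOrderID, ORDER_ID)
For each row, the function compares the current value of $$MaxOrderID with ORDER_ID and keeps the larger value, so the parameter holds the maximum ID processed when the run completes.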
You can configure the following properties for an in-out parameter:

Description
Optional. Description that is displayed with the parameter in the job details and the mapping task. Maximum length is 255 characters.

Is expression variable
Optional. Controls whether Data Integration resolves the parameter value as an expression. Disable this option to resolve the parameter as a literal string. Applicable when the data type is String or Text. Not available for elastic mappings. Default is disabled.

Default Value
Optional. Default value for the parameter, which might be the initial value when the mapping first runs. Use the following format for default values of date/time variables: MM/DD/YYYY HH24:MI:SS.US.

Retention Policy
Required. Determines when the mapping task retains the current value, based on the task completion status and the retention policy. Select one of the following options:
- On success or warning (available for non-elastic mappings)
- On success
- On warning (available for non-elastic mappings)
- Never

Aggregation Type
Required. Aggregation type of the variable. Determines the type of calculation that you can perform and the available variable functions. Select one of the following options:
- Count to count the number of rows read from the source.
- Max to determine a maximum value from a group of values.
- Min to determine a minimum value from a group of values.
In-out parameter values
An in-out parameter is a placeholder for a value or values that the task applies at run time. You define the
value of the in-out parameter in the mapping and you can edit the value when you configure the mapping
task.
A mapping task uses the default value of the parameter, the value saved from the previous run, and any value defined in a parameter file to evaluate the in-out parameter at run time.
Note:
• If the task does not use a function to calculate the value of an in-out parameter, the task saves the default
value of the parameter as the initial current value.
• An in-out parameter value cannot exceed 4000 characters.
At run time, the mapping task looks for the parameter value first in the parameter file, if you use one, then in the value saved from the previous run, and then in the default value defined in the mapping. If you want to override a saved value, define a value for the in-out parameter in a parameter file. The task uses the value in the parameter file.
• When you write expressions that use in-out parameters, you do not need string identifiers for string
variables.
• When you use a parameter in a transformation, enclose string parameters in string identifiers, such as
single quotation marks, to indicate the parameter is a string.
• When you use an in-out parameter of the date/time type in a source filter, you must enclose the in-out parameter in single quotes, because the value received after Informatica Intelligent Cloud Services resolves the in-out parameter can contain spaces.
• When required, change the format of a date/time parameter to match the format in the source.
• If you copy, import, or export a mapping task, the session values of the in-out parameters are included.
• You cannot use in-out parameters in a link rule or as part of a field name in a mapping.
• You cannot use in-out parameters in an expression macro, because they rely on column names.
• When you use an in-out parameter in an expression or parameter file, precede the parameter name with
two dollar signs ($$).
• An in-out parameter value cannot exceed 4000 characters.
• For elastic mappings, you cannot use an in-out parameter as a placeholder for an expression.
Creating an in-out parameter
You can configure an in-out parameter from the Mapping Designer or the Mapplet Designer.
1. In the Mapping Designer or Mapplet Designer, add the transformation where you want to use an in-out
parameter and add the upstream transformations.
2. Open the Parameters panel.
The In-Out Parameters display beneath the Input Parameters.
When you deploy a mapping that includes an in-out parameter, the task sets the parameter value at run time
based on the parameter's retention policy. By default, the mapping task retains the value set during the last
session. If needed, you can reset the value in the mapping task.
From the mapping task wizard, you can perform the following actions for in-out parameters:
• View the values of all in-out parameters in the mapping, which can change each time the task runs.
• Reset the configuration to the default values. Click Refresh to reset a single parameter. Click Refresh All
to reset all the parameters.
• Edit or change specific configuration details. Click Edit.
For example, the following image shows configuration details of the "Timestamp" parameter and the value at
the end of the last session:
The following image shows an example of the available details, including the current value of the specified
parameter, set during the last run of a mapping task:
The in-out parameters appear in the job details based on the retention policy that you set for each parameter.
In-out parameter example
You can use an in-out parameter as a persistent task variable to manage an incremental data load.
The following example uses an in-out parameter to set a date counter for the task and perform an
incremental read of the source. Instead of manually entering a task override to filter source data each time
the task runs, the mapping contains a parameter, $$IncludeMaxDate.
In the example shown here, the in-out parameter is a date field where you want to support the MM/DD/YYYY
format. To support this format, you can use the SetVariable function in the Expression transformation and a
string data type.
Note: You can also configure a date/time data type if your source uses a date format like
YYYY-MM-DD HH:MM:SS. In that case, use the SetMaxVariable function.
In the Mapping Designer, you open the Parameters panel and configure an in-out parameter as shown in the
following image:
• The Source transformation applies the following filter to select rows from the users table where the
transaction date, TIMESTAMP, is greater than the in-out parameter, $$IncludeMaxDate:
users.TIMESTAMP > '$$IncludeMaxDate'
The Source transformation also applies the following sort order to the output to simplify the expression in
the next transformation:
users.TIMESTAMP (Ascending)
• The Expression transformation contains a simple expression that sets the current value of
$$IncludeMaxDate.
The Expression output field, OutMaxDate, is a string type that enables you to map the expression output
to the target.
The SetVariable function sets the current parameter value each time the session runs. For example, if you set the default value of $$IncludeMaxDate to 2016-04-04, the task reads rows dated after 2016-04-04 the first time it runs. When the session completes, the task sets $$IncludeMaxDate to the latest date read. The next time the session runs, the task reads rows with a date greater than the saved value, based on the source filter.
You can view the saved expression for OutMaxDate, which also converts the source column to a DATE_ID
in the format YYYY-MM-DD.
• The Target transformation maps the Expression output field to a target column.
When the mapping runs, the OutMaxDate contains the last date for which the task loaded records.
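A minimal sketch of such an expression, assuming the field and parameter names used in this example:
OutMaxDate = SETVARIABLE($$IncludeMaxDate, TO_CHAR(users.TIMESTAMP, 'YYYY-MM-DD'))
SETVARIABLE sets $$IncludeMaxDate to the given value for each row that passes through the transformation and returns the new current value, which is written to the target through OutMaxDate.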
The following example uses an in-out parameter to set a date counter for the task and perform an
incremental read of the source. Instead of manually entering a task override to filter source data each time
the task runs, the mapping contains a parameter, $$IncludeMaxDate. This example is based on a relational
database source with an incremental timestamp column.
1. Create a mapping.
2. Create and define the in-out parameter.
3. Configure the filter condition and source in the Source transformation.
4. Add an Expression transformation and configure the SetMaxVariable expression.
Create a mapping
The in-out parameter is a date field where you want to support the MM-DD-YYYY HH24:MI:SS.US
format. To support this format, you can use the SetMaxVariable function in the Expression
transformation and a date/time data type.
In the Mapping Designer, open the Parameters panel and configure an in-out parameter as shown in the
following image:
Use the Source filtering options in the Source transformation to apply the following filter to select rows from the users table where the transaction date, TIMESTAMP, is greater than the in-out parameter, $$IncludeMaxDate:
users.TIMESTAMP > '$$IncludeMaxDate'
The Expression transformation contains a simple expression that sets the current value of
$$IncludeMaxDate.
The New Field dialog box shows the Field Type as Variable Field, Name as VariableMaxDate, Type as
date/time, and Precision as 29.
The SetMaxVariable function sets the current parameter value each time the task runs. For example, if you set the default value of $$IncludeMaxDate to 04-04-2020 10:00:00, the task reads rows dated after 04-04-2020 10:00:00 the first time it runs. For the first task run, you specify the start date based on your needs. If the latest row read has a timestamp of 11-04-2020 10:00:00, the task sets $$IncludeMaxDate to that value when the session is complete. The next time the task runs, it reads rows with a date/time value greater than 11-04-2020 10:00:00 based on your configuration of the Source filtering options.
You can view the saved expression for VariableMaxDate.
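A minimal sketch of the expression, assuming the field and parameter names used in this example:
VariableMaxDate = SETMAXVARIABLE($$IncludeMaxDate, users.TIMESTAMP)
For each row, SETMAXVARIABLE compares the current value of $$IncludeMaxDate with users.TIMESTAMP and keeps the later value, so the parameter holds the latest timestamp read when the run completes.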
After the mapping runs successfully, the in-out parameter contains the last date for which the task
loaded data.
When you enable the Is expression variable option, Data Integration resolves the parameter as an expression. When you disable the option, Data Integration resolves the parameter as a literal string.
You can use an in-out parameter as an expression variable in the following transformations:
• Expression
• Filter
• Aggregator
• Router
You can override the parameter at runtime with a value specified in a parameter file.
Parameter files
A parameter file is a list of user-defined parameters and their associated values.
Use a parameter file to define values that you want to update without having to edit the task. You update the
values in the parameter file instead of updating values in a task. The parameter values are applied when the
task runs.
You can use a parameter file to define parameter values in the following tasks:
Mapping tasks
Define values for parameters in the following transformations:
• Source
• Target
• Lookup
• SQL
Define parameter values for objects in the following transformations:
• Source
• Target
• Lookup
Also, define values for parameters in data filters, expressions, and lookup expressions.
Note: Not all connectors support parameter files. To see if a connector supports runtime override of
connections and data objects, see the help for the appropriate connector.
Synchronization tasks
Define values for parameters in data filters, expressions, and lookup expressions.
PowerCenter tasks
Define values for parameters and variables in data filters, expressions, and lookup expressions.
You cannot use a parameter file if the mapping task is based on an elastic mapping.
You enter the parameter file name and location when you configure the task.
You group parameters in different sections of the parameter file. Each section is preceded by a heading that
identifies the project, folder, and asset to which you want to apply the parameter values. You define
parameters directly below the heading, entering each parameter on a new line.
The following table describes the headings that define each section in the parameter file and the scope of the
parameters that you define in each section:
#USE_SECTIONS
Tells Data Integration that the parameter file contains asset-specific parameters. Use this heading as the first line of a parameter file that contains sections. Otherwise, Data Integration reads only the first global section and ignores all other sections.

[Global]
Defines parameters for all projects, folders, tasks, taskflows, and linear taskflows.

[project name].[folder name].[taskflow name]
-or-
[project name].[taskflow name]
Defines parameters for tasks in the named taskflow only. If a parameter is defined in a taskflow section and in a global section, the value in the taskflow section overrides the global value.

[project name].[folder name].[linear taskflow name]
-or-
[project name].[linear taskflow name]
Defines parameters for tasks in the named linear taskflow only. If a parameter is defined in a linear taskflow section and in a global section, the value in the linear taskflow section overrides the global value.

[project name].[folder name].[task name]
-or-
[project name].[task name]
Defines parameters for the named task only. If a parameter is defined in a task section and in a global section, the value in the task section overrides the global value. If a parameter is defined in a task section and in a taskflow or linear taskflow section and the taskflow uses the task, the value in the task section overrides the value in the taskflow section.
If the parameter file does not contain sections, Data Integration reads all parameters as global.
Precede the parameter name with two dollar signs, as follows: $$<parameter>. Define parameter values as
follows:
$$<parameter>=value
$$<parameter2>=value2
For example, you have the parameters SalesQuota and Region. In the parameter file, you define each
parameter in the following format:
$$SalesQuota=1000
$$Region=NW
The parameter value includes any characters after the equals sign (=), including leading or trailing spaces.
Parameter values are treated as String values.
Parameter scope
When you define values for the same parameter in multiple sections in a parameter file, the parameter with
the smallest scope takes precedence over parameters with larger scope.
In this case, Data Integration gives precedence to parameter values in the following order:
1. Values defined in a task section
2. Values defined in a taskflow or linear taskflow section
3. Values defined in the global section
If you define a parameter in a task section and in a taskflow or linear taskflow section and the taskflow uses
the task, Data Integration uses the parameter value defined in the task section.
For example, you define the following parameter values in a parameter file:
#USE_SECTIONS
$$source=customer_table
[GLOBAL]
$$location=USA
$$sourceConnection=Oracle
[Default].[Sales].[Task1]
$$source=Leads_table
[Default].[Sales].[Taskflow2]
$$source=Revenue
$$sourceconnection=ODBC_1
[Default].[Taskflow3]
$$source=Revenue
$$sourceconnection=Oracle_DB
Task1 contains the $$location, $$source, and $$sourceconnection parameters. Taskflow2 and Taskflow3
contain Task1.
When you run Taskflow2, Data Integration uses $$source=Leads_table, $$sourceconnection=ODBC_1, and $$location=USA. The value of $$source in the Task1 section overrides the value in the Taskflow2 section.
When you run Taskflow3, Data Integration uses $$source=Leads_table, $$sourceconnection=Oracle_DB, and $$location=USA.
When you run Task1 on its own, Data Integration uses $$source=Leads_table, $$sourceConnection=Oracle, and $$location=USA.
For all other tasks that contain the $$source parameter, Data Integration uses the value customer_table.
For example, a parameter file might contain the following sections:
[Global]
$$ff_conn=FF_ja_con
$st=CA
[Default].[Accounts].[April]
$$QParam=SELECT * from con.ACCOUNT where city=LAX
$$city=LAX
$$tarOb=accounts.csv
$$oracleConn=Oracle_Src
$$state=$st
By default, Data Integration uses the following parameter file directory on the Secure Agent machine:
When you use a parameter file in a synchronization task, save the parameter file in the default directory.
For mapping tasks, you can also save the parameter file in one of the following locations:
A local machine
You enter the file name and directory on the Schedule tab when you create the task. Enter the absolute file path. Alternatively, enter a path relative to one of the following $PM system variables, for example, $PMSessionLogDir/ParameterFiles:
• $PMRootDir
• $PMTargetFileDir
• $PMSourceFileDir
• $PMLookupFileDir
• $PMCacheDir
• $PMSessionLogDir
• $PMExtProcDir
• $PMTempDir
To find the configured path of a system variable, see the pmrdtm.cfg file located at the following
directory:
<Secure Agent installation directory>\apps\Data_Integration_Server\55.0.<version>\ICS
\main\bin\rdtm
You can also find the configured path of any variable except $PMRootDir in the Data Integration Server
system configuration details in Administrator.
If you do not enter a location, Data Integration uses the default parameter file directory.
A cloud platform
You can use a connection stored with Informatica Intelligent Cloud Services. The following table shows
the connection types that you can use and the configuration requirements for each connection type:
Amazon S3 V2
You can use a connection that was created with the following credentials:
- Access Key
- Secret Key
- Region
The S3 bucket must be public.

Azure Data Lake Store Gen2
You can use a connection that was created with the following credentials:
- Account Name
- ClientID
- Client Secret
- Tenant ID
- File System Name
- Directory Path
The storage point must be public.

Google Storage V2
You can use a connection that was created with the following credentials:
- Service Account ID
- Service Account Key
- Project ID
The storage bucket must be public.
Create the connection before you configure the task. You select the connection and file object to use on
the Schedule tab when you create the task.
Data Integration displays the location of the parameter file and the value of each parameter in the job details
after you run the task.
• If a parameter is not defined in the parameter file, Data Integration uses the value defined in the task.
• If a parameter value is defined more than once in the same section, Data Integration uses the first value.
• If a parameter value is another parameter defined in the file, Data Integration uses the first value of the
variable in the most specific scope. For example, a parameter file contains the following parameter
values:
[GLOBAL]
$$ffconnection=my_ff_conn
$$var2=California
$$var6=$var5
$var5=North
[Default].[folder5].[sales_accounts]
$$var2=$var5
$var5=south
In the task "sales_accounts," the value of "var5" is "south." Since var2 is defined as var5, var2 is also
"south."
• If a task is defined more than once, Data Integration uses the most recent section after processing the file
top-down.
• The value of a parameter is global unless it is present in a section.
• Data Integration ignores sections with syntax errors.
When you generate a parameter file template, the file contains the default parameter values from the
mapping on which the task is based. If you do not specify a default value when you create the parameter, the
value for the parameter in the template is blank.
The parameter file template does not contain the following elements:
If you add, edit, or delete parameters in the mapping, download a new parameter file template.
When you define a connection value in a parameter file, the connection type must be the same as the default
connection type in the mapping task. For example, you create a Flat File connection parameter and use it as
the source connection in a mapping. In the mapping task, you provide a flat file default connection. In the
parameter file, you can only override the connection with another flat file connection.
When you override an FTP connection using a parameter, the local directory for the file must be the same.
You cannot use a parameter file to override a lookup with an FTP/SFTP connection.
Note: Some connectors support only cached lookups. To see which type of lookup a connector supports, see
the help for the appropriate connector.
3. In the mapping task, define the parameter details:
a. Select a default connection.
b. On the Schedule tab, enter the parameter file directory and parameter file name.
4. In the parameter file, define the connection parameter with the value that you want to use at runtime.
Precede the parameter name with two dollar signs ($$). For example, you have a parameter with the
name ConParam and you want to override it with the connection OracleCon1. You define the runtime
value with the following format:
$$ConParam=OracleCon1
5. If you want to change the connection, update the parameter value in the parameter file.
Note: You cannot override source objects when you read from multiple relational objects or from a file list.
You cannot override target objects if you create a target at run time.
When you define an object parameter in the parameter file, the parameter in the file must have the same
metadata as the default parameter in the mapping task. For example, if you override the source object
ACCOUNT with EMEA_ACCOUNT, both objects must contain the same fields and the same data types for
each field.
2. In the mapping, use the object parameter at the object that you want to override.
3. In the mapping task, define the parameter details:
a. Set the type to Single.
b. Select a default data object.
c. On the Schedule tab, enter the parameter file directory and file name.
4. In the parameter file, specify the object to use at runtime.
Precede the parameter name with two dollar signs ($$). For example, you have a parameter with the
name ObjParam1 and you want to override it with the data object SourceTable. You define the runtime
value with the following format:
$$ObjParam1=SourceTable
5. If you want to change the object, update the parameter value in the parameter file.
When you define an SQL query, the fields in the overridden query must be the same as the fields in the default
query. The task fails if the query in the parameter file contains fewer fields or is invalid.
If a filter condition parameter is not resolved in the parameter file, Data Integration uses the parameter name as the filter value, and the task returns zero rows.
Chapter 3
CLAIRE recommendations
If your organization has enabled CLAIRE recommendations, you can receive recommendations during
mapping design. CLAIRE, Informatica's AI engine, uses machine learning to make recommendations based
on the current flow of the mapping and metadata from prior mappings across Informatica Intelligent Cloud
Services organizations.
When your organization opts in to receive CLAIRE-based recommendations, anonymous metadata from your
organization's mappings is analyzed and leveraged to offer design recommendations.
To disable recommendations for the current mapping, use the recommendation toggle. You can enable
recommendations again at any time.
When you create a new mapping, recommendations are enabled by default. If you edit an existing mapping,
recommendations are disabled by default.
CLAIRE can make the following types of recommendations during mapping design:
When CLAIRE detects a transformation to add, the Add Transformation icon displays orange on the
transformation link as shown in the following image:
Click the Add Transformation icon to display the Add Transformation menu. Recommended transformations
are listed at the top of the menu with the most confident recommendation first.
The following image shows the Add Transformation menu with recommended transformations at the top:
Select a transformation from the menu to add it to the mapping in the current location.
Source recommendations
CLAIRE might recommend additional source objects when the Source transformation in a mapping uses an
Amazon Redshift, Oracle, or Snowflake connection.
To receive CLAIRE recommendations for additional source objects belonging to a connection, the connection
needs to be scanned for metadata and data. For more information about metadata extraction and data
profiling using Metadata Command Center, see the Metadata Command Center help.
CLAIRE makes recommendations for additional source objects based on primary key and foreign key
relationships. The recommendations can be helpful when you want to use multiple source objects and you
have many tables on the data source to search through.
For example, you want to find a list of customers together with the type of car each customer has ordered.
For the Source transformation in your mapping, you use a connection to an Oracle database that contains
hundreds of tables. You select a customer table for the source object. In the CLAIRE Recommendations tab,
CLAIRE suggests several tables that can be joined to the customer table. One of the tables contains
customer order data. You add the table to the mapping as an additional Source transformation.
When a recommendation is available, Data Integration highlights the Recommendations tab. Select the
Recommendations tab to see the recommendations.
Open the Source transformation and click the Fields tab to review the source fields in the source object.
If you want to use the source, in the Recommendations tab, click the Accept icon. In the mapping canvas,
connect the Source transformation to the data flow.
If you don't want to use the recommended source, click Decline. Data Integration removes the recommended
Source transformation from the mapping canvas.
For example, CLAIRE might recommend that you add a Data Masking transformation to the mapping to mask
sensitive data.
For more information about using catalog objects in mappings, see Chapter 4, “Data catalog discovery” on
page 70.
Chapter 4
Data catalog discovery
Use data catalog discovery to find objects in the catalog that you can use in the following types of Data
Integration assets:
• Mappings. Discover tables, views, and delimited flat files to use as sources, targets, or lookup objects in
new mappings or in mappings that you currently have open in Data Integration.
• Elastic mappings. If you have the appropriate license, you can discover Amazon S3 and Amazon Redshift
objects to use as sources, targets, and lookup objects in new elastic mappings or in elastic mappings that
you currently have open in Data Integration.
• Synchronization tasks. Discover tables, views, and delimited flat files to use as sources in new
synchronization tasks.
• File ingestion tasks. You can discover Amazon S3, Microsoft Azure Blob Storage, and Hadoop Files
objects to use as sources in file ingestion tasks.
Note: Before you can use data catalog discovery, your organization administrator must configure the
Enterprise Data Catalog integration properties on the Organization page in Administrator. For more
information about configuring Enterprise Data Catalog integration properties, see the Administrator help.
Performing data catalog discovery
Perform data catalog discovery on the Data Catalog page.
The page displays a Search field and the total number of table, view, and flat file assets in the catalog.
In the Search field, enter a phrase that might occur in the object name, description, or other metadata such as
the data domain or associated business glossary term. When you select an object from the search results,
Data Integration asks you where you want to use the object. Data Integration also imports the connection if it
does not exist in your organization.
• In a new mapping. If you select this option, Data Integration creates a new mapping and adds the object
to the mapping inventory. You can then add the object to the mapping as a source, target, or lookup
object.
• In an open mapping. If you select this option, Data Integration asks you to select the mapping and adds
the object to the mapping inventory. You can then add the object to the mapping as a source, target, or
lookup object.
• In a new synchronization task. If you select this option, Data Integration creates a new synchronization
task and adds the object as the source object.
• In a new file ingestion task. If you select this option, Data Integration creates a new file ingestion task and
adds the object as the source object.
Each mapping has its own inventory. Each object that you add to the mapping inventory stays in the inventory
until you delete it.
To add an object from the inventory as a source, target, or lookup object, select the Source, Target, or Lookup
transformation and click Select an object from the inventory on the tab where you configure the connection.
Then select the object from the inventory.
If you enable CLAIRE recommendations in the Mapping Designer, Data Integration highlights the
Recommendations tab when there is a new recommendation for objects in the inventory. For example, if an
object in the inventory contains sensitive data such as credit card numbers, and you add it to the mapping as
a source, target, or lookup object, Data Integration highlights the Recommendations tab. When you open the
recommendations, CLAIRE might recommend that you add a Data Masking transformation to the mapping to
mask the sensitive data.
You can use the * and ? wildcard characters in the search phrase. For example, to find objects that start with
the string "Cust", enter Cust* in the Search field.
You can also enter keyword searches. For example, if you enter tables with order in the Search field, Data
Integration returns tables with "order" in the name or description, tables that have the associated business
term "order," and tables that contain columns for which the "order" data domain is inferred or assigned.
For more information about Enterprise Data Catalog searches and search results, see the Enterprise Data
Catalog documentation.
The following image shows an example of search results when you enter "tables with order" as the search
phrase:
You can perform the following actions on the search results page:
Use the filters to filter search results by asset type, resource type, resource name, number of rows, data
domains, and date last updated.
Show details.
Sort results.
To open an object in Enterprise Data Catalog, click the object name. To view the object, you must log in
to Enterprise Data Catalog with your Enterprise Data Catalog user name and password.
To use the object in a synchronization task, file ingestion task, or mapping, click Use Object. You can
select an object if the object is a valid source, target, or lookup type for a mapping or a valid source type
for the task. For example, you can select an Oracle table to use as the source in a new synchronization
task, but you cannot select a Hive table.
When you select the object, Data Integration prompts you to select the task where you want to use the
object and imports the connection if it does not exist.
Connection properties vary based on the object type. Data Integration imports most connection
properties from the resource configuration in Enterprise Data Catalog, but you must enter other required
properties, such as the connection name and password.
After you configure the connection or if the connection already exists, Data Integration adds the object
to a new synchronization task, file ingestion task, or to the inventory of a new or open mapping.
Before you can use data catalog discovery, your organization administrator must configure the Enterprise
Data Catalog integration properties on the Organization page in Administrator.
The following video shows you how to discover and select a catalog object as the source in a new mapping:
You open the Data Catalog page and enter "tables with order" in the search field. The search returns a list of
tables and views that contain “order” in the name or description, tables that have the associated business
term "order," and tables that contain columns for which the "order" data domain is inferred or assigned. The
search results also show details about each object.
You select the table that you were looking for and add it to a new mapping. You then open the Source
transformation in the mapping, click the Source tab, and click Select an object from the inventory. You select
the table from the mapping inventory and click OK. The table becomes the source object.
Visio templates
A Visio template defines parameterized data flow logic that you can use in mapping tasks.
Visio templates are a legacy feature known as integration templates in Informatica Cloud. Visio templates
are not supported in Data Integration by default. If you require the Visio template feature, contact Informatica
Global Customer Support.
As an alternative to Visio templates, Informatica recommends that you take advantage of the mapping and
mapping task templates that are included in Data Integration. These templates include pre-built logic that you
can use for data integration, cleansing, and warehousing tasks. The templates can be used as is or as a head
start in creating a mapping or mapping task to suit your business needs. You can choose the template that
you want to use for a new mapping or mapping task on the New Asset page.
If you use Visio templates, you must complete the following tasks:
1. Configure a Visio template using the Cloud Integration Template Designer, which is a Data Integration
plug-in for Microsoft Visio. Use the Informatica toolbar and Informatica stencil to configure the template.
You can create a new file in the Cloud Integration Template Designer, use a mapping XML file exported
from PowerCenter, or use a synchronization task mapping XML file exported from Data Integration.
When you configure the Visio template, you define the general data flow and configure template
parameters to enable flexibility in how the Visio template can be used.
2. Publish the Visio template using the Cloud Integration Template Designer.
3. Upload the Visio template to your Data Integration organization.
When you upload the Visio template, you define template parameter properties, such as default values
and display properties.
4. Create a mapping task based on the template.
When you configure the mapping task, you define the template parameter values for the Visio template.
You can reuse the logic of an uploaded Visio template by creating multiple mapping tasks and defining
template parameter values differently in each task.
Prerequisites
Working with Visio templates requires the following prerequisites:
Configuring a Visio template
Configure a Visio template in the Cloud Integration Template Designer to define flexible, reusable data flow
logic for use in mapping tasks.
A Visio template includes at least one source definition, source qualifier, target definition, and links that
define how data moves between objects. A Visio template can include other Informatica objects, such as a
Lookup object to define lookups or a Joiner object to join heterogeneous sources.
A pipeline consists of a source qualifier and all the objects and targets that receive data from the source
qualifier. You can include one or more pipelines in a Visio template.
When you configure a Visio template, you can configure template parameters. Template parameters are
values that you can define in mapping tasks.
In a Visio template, sources, targets, and lookups are always template parameters. You can create additional
template parameters for other data flow logic, such as filter or join conditions or other expressions.
You can use expression macros in Expression and Aggregator objects. Expression macros allow you to
define dynamic expression logic.
You can also use user-defined parameters in Visio templates and mapping tasks. User-defined parameters
are values that you define in a parameter file. You can update user-defined parameter values without editing
the Visio template or the mapping task. You associate a parameter file with a mapping task.
You can use one of the following methods to create a Visio template:

Create the Visio template in the Cloud Integration Template Designer.
Use this method to create data flow logic entirely in the Cloud Integration Template Designer.

Export a mapping from PowerCenter.
Use this method when you have a PowerCenter mapping that you want to parameterize and use in Informatica Cloud. To create a Visio template from an existing PowerCenter mapping:
1. In the PowerCenter Designer, export the mapping to XML.
2. In the Cloud Integration Template Designer, use the Create Template from Mapping XML button in the Informatica toolbar.
3. Configure the template in the Cloud Integration Template Designer.

Export a task from Data Integration.
Use this method when you have a Data Integration synchronization task or mapping task that you want to expand and parameterize. To create a Visio template from an existing task:
1. On the Explore page, navigate to the task.
2. In the row for the task, click Actions > Download Mapping XML.
3. In the Cloud Integration Template Designer, use the Create Template from Mapping XML button in the Informatica toolbar.
4. Configure the template in the Cloud Integration Template Designer.
You can add or update this information in Data Integration when you upload or edit a Visio template.
The following table describes how information in the Visio template displays in the mapping task:
Image of the Visio template data flow. The image displays object and link names, and descriptive link names can help explain data flow logic.
- Update in Data Integration: Template XML file that you upload when you create or edit a Visio template.
- Display in the mapping task wizard: Definition page.

Source object name.
- Update in Data Integration: Label property on the Visio Template page.
- Display in the mapping task wizard: Source connection and object on the Sources page.

Target object name.
- Update in Data Integration: Label property on the Visio Template page.
- Display in the mapping task wizard: Target connection and object on the Targets page.

Template parameter name.
- Update in Data Integration: Label property on the Visio Template page.
- Display in the mapping task wizard: Template parameter label.

Template parameter description in the Show Parameters dialog box.
- Update in Data Integration: Template parameter description on the Visio Template page.
- Display in the mapping task wizard: Template parameter tooltip.
You can define the value of the template parameter when you upload the Visio template to your Data
Integration organization or when you create a mapping task based on the Visio template.
You can create a template parameter for any logical aspect of a data flow. Sources, targets, and lookups are
always template parameters. You can create additional template parameters for other aspects of the data
flow logic. Some template parameters you might want to create include the following:
For example, if you have regional lookup data in different lookup tables, you might create a $lookuptable$
template parameter in the Lookup object that represents the lookup table. When you configure the mapping
task, you select the lookup connection and table that you want to use. You configure a different mapping task
for each regional lookup table.
To create a template parameter in a Visio template, surround the template parameter name with dollar signs
as follows: $<template_parameter_name>$. Template parameter names are case sensitive.
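For example, to parameterize a filter condition, you might enter a parameter name in the Filter object and let the task developer supply the actual condition in the mapping task. A minimal sketch, with all names invented for illustration:
Filter condition in the Visio template: $SalesFilter$
Value entered in the mapping task: SALES_AMOUNT > 10000 AND REGION = 'WEST'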
By default, the template parameter name displays as the template parameter label in the mapping task
wizard. However, you can also configure a template parameter label in the template or after you upload the
template.
You can use the Show Parameters button on the Informatica toolbar to see the template parameters defined
using the $<template_parameter_name>$ syntax. If you do not use the template parameter name syntax to
configure source, target, or lookup template parameters, they do not display in the Show Parameters dialog
box.
Use an expression macro to specify repetitive expression statements or complex expressions. Expression
macros apply a common expression pattern across a set of fields, such as adding all or a set of fields
together, checking if fields contain null values, or converting dates to a different format.
You can use an expression macro to generate output fields and variable fields.
• Macro variable declaration. When you declare the macro variable, you declare the macro variable name and the fields to be used in the macro expression. Use the link rule format to declare the fields. As with link rules, you can list actual field names or use a link rule, such as All Ports or Pattern.
Use the following syntax for the macro variable declaration:
Macro variable name: Declare_%<macro_variable_name>%
Macro variable fields: {"<field_variable_name>":"<field names or link rule>"}
• Output field names. Expression that defines the field names for the output fields generated by the expression macro. You can use variables and rules to help define output field names, such as <%field%>_out.
For example, you might use the following expression macro to check if any of the address fields that start with "addr" are null. The ISNULL output field is set to a value of 1 or higher if one or more fields are null:
Macro variable name: Declare_%addressports%
Macro variable fields: {"addrport":"Pattern:^addr"}
Output field names: ISNULL
Macro expression: %OPR_SUM[IIF(ISNULL(%addrport%),1,0)]%
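If the source had three matching fields named addr_street, addr_city, and addr_zip (hypothetical names used only for illustration), the macro would expand to an expression similar to the following:
ISNULL=IIF(ISNULL(addr_street),1,0) + IIF(ISNULL(addr_city),1,0) + IIF(ISNULL(addr_zip),1,0)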
For example, you might use a template parameter in the following macro variable declaration to define the
fields to be used in the expression macro:
Macro variable name: Declare_%input%
Macro variable fields: {"inputfields":"$salesdata$"}
When you configure the mapping task, $salesdata$ displays as a template parameter. The fields that you
define for the template parameter are expanded where you use the %inputfields% variable in the Expression
object.
For example, if you know that you want to use all fields that begin with SALES_, you might declare the macro
variable fields as follows:
Macro variable name: Declare_%salesfields%
Macro variable fields: {"SalesFields":"Pattern:^SALES_"}
Or, if you know that you want to use all input fields, you might use the following expression:
Macro variable name: Declare_%salesfields%
Macro variable fields: {"SalesFields":"All Ports"}
For more information about patterns, see the Mapping Architect for Visio documentation.
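As a sketch of how the declarations above might be used, the following expression sums every field captured by the %SalesFields% variable. The output field name TOTAL_SALES is invented for illustration:
TOTAL_SALES=%OPR_SUM[%SalesFields%]%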
A vertical expansion performs the same calculation on multiple fields by generating multiple expressions. To
use a vertical expansion, configure a macro input field that represents multiple incoming fields. When the
task runs, the application performs the same calculations on each field that the macro input field represents.
%OPR_CONCAT%
Uses the CONCAT function and expands an expression in an expression macro to concatenate multiple
fields. %OPR_CONCAT% creates calculations similar to the following expression:
FieldA || FieldB || FieldC...
%OPR_CONCATDELIM%
Uses the CONCAT function and expands an expression in an expression macro to concatenate multiple
fields, and adds a comma delimiter. %OPR_CONCATDELIM% creates calculations similar to the following
expression:
FieldA || ", " || FieldB || ", " || FieldC...
%OPR_IIF%
Uses the IIF function and expands an expression in an expression macro to evaluate a set of IIF
statements. %OPR_IIF% creates calculations similar to the following expression:
IIF(<field> >= <constantA>, <constant1>,
IIF(<field> >= <constantB>, <constant2>,
IIF(<field> >= <constantC>, <constant3>, 'out of range')))
%OPR_SUM%
Uses the SUM function and expands an expression in an expression macro to return the sum of all fields.
%OPR_SUM% creates calculations similar to the following expression:
FieldA + FieldB + FieldC...
For example, the following expression checks if any of the fields are null. If a field is null, it sets the Isnull
field to a positive number:
Isnull=%OPR_SUM[IIF(ISNULL(%fields%),1,0)]%
When expanded, the expression macro generates the following expression, expanded to include all fields defined by the %fields% variable:
Isnull=IIF(ISNULL(fieldA),1,0) + IIF(ISNULL(fieldB),1,0)...
When you configure an expression macro, use one row for the macro variable declaration and another row for the macro expression. Enter the expression macro elements in the Port Name and Expression columns; data type and port type information is not relevant.
Use a parameter file to define values that you want to update without having to edit the Visio template or the
mapping task. For example, you might use a user-defined parameter for a sales quota that changes quarterly.
Or, you might configure a task to update user-defined parameter values in the parameter file at the end of the
job, so the next time the job runs, it uses the new values.
You can include user-defined parameters for multiple Visio templates or mapping tasks in a single parameter
file. You can also use multiple parameter files for different Visio templates or tasks. The mapping task reads
the parameter file before a task runs to determine the start values for the user-defined parameters used in
the task.
User-defined parameter values are treated as String values. When you use a user-defined parameter in an
expression, use the appropriate function to convert the value to the necessary datatype. For example, you
might use the following expression to define a quarterly bonus for employees:
IIF(EMP_SALES < TO_INTEGER($$SalesQuota), 200, 0)
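A minimal parameter file sketch for this example. The [Global] section name is an assumption; verify the section naming conventions in the parameter file documentation for your organization:
[Global]
$$SalesQuota=50000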
To use a parameter file, define the user-defined parameters in the file, save the file to a directory that is local to the Secure Agent that runs the task, and associate the parameter file with the mapping task.
For source qualifier objects, you can configure object-level session properties such as a SQL query override
or pipeline partitioning attributes. Target objects allow different object-level session properties based on
target type, such as null characters or delimiters for flat file targets or target load type for database targets.
Configure object-level session properties in the Session Properties field on the Properties tab of a source qualifier or target object. Use XML to configure the session properties that you want to use, with the following syntax:
<attribute name="<session property name>" value="<value>"/>
For example, you can use the following XML to define target properties in a target object:
<attribute name ="Append if Exists" value ="YES"/>
<attribute name ="Create Directory if Not Exists" value ="YES"/>
<attribute name ="Header Options" value ="No Header"/>
To define partition properties, use a slightly different format. For example, to define read partitions for a database table, you could enter the following XML in the source qualifier object's Session Properties field:
<partition name="Partition1"/>
<partition name="Partition2"/>
<partition name="Partition3"/>
<partitionPoint type="KEY_RANGE">
<ppField name="field1">
<range min="10" max="20" />
<range min="21" max="30" />
<range min="31" max="40" />
</ppField>
</partitionPoint>
Note: XML is case sensitive. Also, unlike in PowerCenter, use an underscore for the KEY_RANGE option.
Visit the Informatica Cloud Community for additional details and examples. You can browse or search for
"session properties".
Optional objects
You can configure objects in a Visio template data flow as optional. When data is not passed to an optional
object in a mapping task, the object is not included in the final data flow for the task.
You can configure any object as optional except source or source qualifier objects.
When you use an optional object, make sure the data flow is still valid if the object is not included. If the data
flow is not valid without the optional object, errors can occur when the task runs.
To configure an object as optional, on the Properties page of the object details dialog box, set the Optional
property to True.
• Enable macros in Microsoft Visio to enable full functionality for the Cloud Integration Template Designer.
• For Mapping Architect for Visio 2010, the Informatica toolbar displays on the Add-Ins tab.
• Use the Show Parameters icon on the Informatica toolbar to see the template parameters declared in a Visio template with the $<template_parameter_name>$ syntax.
• Use the Validate Mapping Template icon on the Informatica toolbar to perform basic validation.
• You can cut and paste objects within a Visio template. However, you might encounter errors if you try to
copy and paste across different Visio templates.
• When you configure a link, Mapping Architect for Visio indicates that the link is connected to an object by
highlighting the object. Connect both sides of the link.
• The Mapping Architect for Visio Dictionary link rule is not supported at this time.
• For general information about using Mapping Architect for Visio, see the Mapping Architect for Visio
documentation. The Mapping Architect for Visio documentation is available in the Informatica Cloud
Developer Community: https://network.informatica.com/docs/DOC-15318.
• Do not use the Declare Mapping Parameters and Variables icon on the Informatica toolbar.
• Do not use the <template_name>_<param>.xml file that is created when you publish a template.
• To avoid confusion when defining values for a template parameter, use a logical name for each template
parameter and use a unique name for each template parameter in a template.
• When you parameterize a user-defined join, use the fully-qualified name in the parameter value.
• You can use the Show Parameters icon on the Informatica toolbar to view all template parameters in the
file.
• Template parameter names and values are case-sensitive unless otherwise noted.
• Rename sources, targets, lookups, and link rules in the Visio template to provide meaningful names.
Sources, targets, and lookups become template parameters automatically in an uploaded Visio template.
• You can include the following PowerCenter objects in a Visio template:
- Aggregator
- BAPI/RFC transformation
- Expression
- Joiner
- Lookup
- Mapplets
- Normalizer
- Rank
- Router
- Source
- Source Qualifier
- Sorter
- Target
- Transaction Control
- Union
- Update Strategy
- XML transformation
- Java transformation
- SQL transformation
When you publish a Visio template, the Cloud Integration Template Designer creates a Visio template XML
file that you can upload to Data Integration.
Before you publish a Visio template, you can perform the following optional tasks:
• Validate the template. You can use the Validate Mapping Template icon on the Informatica toolbar to
perform basic validation.
• Create an image file. You can save the template as .JPG or .PNG to create an image file. You can use the
image file to represent the template data flow when you upload the Visio template to your organization.
Logical connections
A logical connection is a name used to represent a shared connection.
Use a logical connection when you want to use the same connection for more than one connection template
parameter in a Visio template. To use a logical connection, enter the same logical connection name for all
template parameters that you want to use the same connection.
When you use logical connections for connections that display on a single page of the mapping task wizard,
the wizard displays a single connection for the user to select.
For example, you might use a logical connection when a Visio template includes two sources that should
reside in the same source system. To ensure that the task developer selects a single connection for both
sources, when you upload the Visio template, you can enter "Sources" as the logical connection name for
both source template parameters. When you create a mapping task, the mapping task wizard displays one
source connection template parameter named Sources and two source object template parameters.
When you use logical connections for connections that display on different pages of the mapping task
wizard, the wizard uses the logical connection name for the connection template parameters. If a logical
connection appears on the Other Parameters page, it displays in the Shared Connection Details section. Since
the requirement to use the same connection for all logical connections is less obvious, you might configure a
lookup to display on the same wizard page as the other logical connection, or use descriptions for each
logical template parameter to create tooltips to guide the task developer.
You can use any string value for the logical connection name.
When you configure an input control option for a template parameter, select the input control that best suits the template parameter type.
For example, for a filter template parameter, you could use a condition, expression, or text box input control,
but the condition input control would best indicate the type of information that the template parameter
requires. The field or field mapping input controls would not allow the task developer to enter the appropriate
information for a filter template parameter.
Condition
Use to create a boolean condition that resolves to True or False. Displays a Data Filter dialog box that allows you to create a simple or advanced data filter. Use for filter expressions defined in a Filter object, conditions used in a Router object, and other boolean expressions.

Expression
Use to create simple or complex expressions. Displays a Field Expression dialog box with a list of source fields, functions, and operators. Use for expressions in Expression and Aggregator objects.

Field
Use to select a single source or lookup field. Displays a list of fields and allows you to select a single field. Use for field selection in a lookup condition or other expressions, or for a link rule that propagates specific fields.

Field Mapping
Use to map more than one field. Displays a field mapping input control like the one on the Field Mapping page of the synchronization task wizard, which allows you to map available fields from upstream sources, lookups, and mapplets to downstream mapplets or targets. Use to define a set of field-level mappings between sources, lookups, mapplets, and targets. To allow aggregate functions in expressions, enable the Aggregate Functions option.

Custom Dropdown
Use to provide a list of options. Displays a drop-down menu with options that you configure when you upload the Visio template. When you define the options, you create a display label and a value for each label. In the mapping task wizard, the label displays; the value does not. Use to define a set of options for selection.
When you upload a Visio template, the defined source and target parameters appear on the Sources or
Targets steps by default. You can move other connection template parameters to these steps. All other
template parameters appear in the Other Parameters step by default.
You can create steps in the mapping task wizard to group similar parameters together. For example, you can
group the field mapping parameters into one step, and the filter parameters into another step.
You can also create steps in the mapping task wizard to order logically dependent parameters so that the task developer configures the parameters in a certain order. For example, if a Visio template includes a parameterized mapplet and a field mapping, configure the mapplet parameter to appear on a step before the field mapping parameter, so that the task developer selects the mapplet before configuring the field mapping.

You can configure advanced session properties in the following categories:
• General
• Performance
• Advanced
• Error handling
If you configure advanced session properties for a task and the task is based on an elastic mapping, the
advanced session properties are different.
General options
The following list describes the general options:

Session Log File Name
Name for the session log. Use any valid file name. You can use the following variables as part of the session log name:
- $CurrentTaskName. Replaced with the task name.
- $CurrentTime. Replaced with the current time.

Session Log File Directory
Directory where the session log is saved. Use a directory local to the Secure Agent that runs the task. By default, the session log is saved to the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/logs

Source File Directory
Source file directory path. Use for flat file connections only.

Target File Directory
Target file directory path. Use for flat file connections only.

Treat Source Rows as
When the task reads source data, it marks each row with an indicator that specifies the target operation to perform when the row reaches the target. Use one of the following options:
- Insert. All rows are marked for insert into the target.
- Update. All rows are marked for update in the target.
- Delete. All rows are marked for delete from the target.
- Data Driven. The task uses the Update Strategy object in the data flow to mark the operation for each source row.

Commit Type
Commit type to use. Use one of the following options:
- Source. The task performs commits based on the number of source rows.
- Target. The task performs commits based on the number of target rows.
- User Defined. The task performs commits based on the commit logic defined in the Visio template.
When you do not configure a commit type, the task performs a target commit.

Rollback Transactions on Errors
Rolls back the transaction at the next commit point when the task encounters a non-fatal error. When the task encounters a transformation error, it rolls back the transaction if the error occurs after the effective transaction generator for the target.
Performance settings
The following list describes the performance settings:

DTM Buffer Size
Amount of memory allocated to the task from the DTM process. By default, a minimum of 12 MB is allocated to the buffer at run time. Use one of the following options:
- Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
- A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
You might increase the DTM buffer size in the following circumstances:
- When a task contains large amounts of character data, increase the DTM buffer size to 24 MB.
- When a task contains n partitions, increase the DTM buffer size to at least n times the value for the task with one partition.
- When a source contains a large binary object with a precision larger than the allocated DTM buffer size, increase the DTM buffer size so that the task does not fail.

Reinitialize Aggregate Cache
Overwrites existing aggregate files for a task that performs incremental aggregation.

Session Retry on Deadlock
The task retries a write on the target when a deadlock occurs.

Create Temporary View
Allows the task to create temporary view objects in the database when it pushes the task to the database. Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. You can also use this option for a task based on a Visio template that includes a lookup with a lookup source filter.

Create Temporary Sequence
Allows the task to create temporary sequence objects in the database. Use when the task is based on a Visio template that includes a Sequence Generator transformation.

Enable cross-schema pushdown optimization
Enables pushdown optimization for tasks that use source or target objects associated with different schemas within the same database. To see if cross-schema pushdown optimization is applicable to the connector you use, see the help for the relevant connector. This property is enabled by default.

Allow Pushdown for User Incompatible Connections
Indicates that the database user of the active database has read permission on idle databases. If you indicate that the database user of the active database has read permission on idle databases, and it does not, the task fails. If you do not indicate that the database user of the active database has read permission on idle databases, the task does not push transformation logic to the idle databases.

Session Sort Order
Order to use to sort character data for the task.
Advanced options
The following list describes the advanced options:

Cache Lookup() Function
Caches lookup functions in Visio templates with unconnected lookups. Overrides the lookup configuration in the template. By default, the task performs lookups on a row-by-row basis, unless otherwise specified in the template.

Default Buffer Block Size
Size of buffer blocks used to move data and index caches from sources to targets. By default, the task determines this value at run time. Use one of the following options:
- Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
- A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
The task must have enough buffer blocks to initialize. The minimum number of buffer blocks must be greater than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets. The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size. Default settings create enough buffer blocks for 83 sources and targets. If the task contains more than 83, you might need to increase DTM Buffer Size or decrease Default Buffer Block Size. A worked example follows this list.

Line Sequential Buffer Length
Number of bytes that the task reads for each line. Increase this setting from the default of 1024 bytes if source flat file records are larger than 1024 bytes.

Maximum Memory Allowed for Auto Memory Attributes
Maximum memory allocated for automatic cache when you configure the task to determine the cache size at run time. You enable automatic memory settings by configuring a value for this attribute. Enter a numeric value. The default unit is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB. If the value is set to zero, the task uses default values for memory attributes that you set to auto.

Maximum Percentage of Total Memory Allowed for Auto Memory Attributes
Maximum percentage of memory allocated for automatic cache when you configure the task to determine the cache size at run time. If the value is set to zero, the task uses default values for memory attributes that you set to auto.

Additional Concurrent Pipelines for Lookup Cache Creation
Restricts the number of pipelines that the task can create concurrently to pre-build lookup caches. You can configure this property when the Pre-build Lookup Cache property is enabled for a task or transformation. When the Pre-build Lookup Cache property is enabled, the task creates a lookup cache before the Lookup receives the data. If the task has multiple Lookups, the task creates an additional pipeline for each lookup cache that it builds. To configure the number of pipelines that the task can create concurrently, select one of the following options:
- Auto. The task determines the number of pipelines it can create at run time.
- Numeric value. The task can create the specified number of pipelines to create lookup caches.

Custom Properties
Configure custom properties for the task. You can override the custom properties that the task uses after the job has started. The task also writes the override value of the property to the session log.

Pre-build Lookup Cache
Allows the task to build the lookup cache before the Lookup receives the data. The task can build multiple lookup cache files at the same time to improve performance. You can configure this option in a Visio template or in a task. The task uses the task-level setting if you configure the Lookup option as Auto in the Visio template. Configure one of the following options:
- Always allowed. The task can build the lookup cache before the Lookup receives the first source row. The task creates an additional pipeline to build the cache.
- Always disallowed. The task cannot build the lookup cache before the Lookup receives the first row.
When you use this option, also configure the Additional Concurrent Pipelines for Lookup Cache Creation property. The task can pre-build the lookup cache if that property is greater than zero.

DateTime Format String
Date/time format for the task. You can specify seconds, milliseconds, microseconds, or nanoseconds.
To specify seconds, enter MM/DD/YYYY HH24:MI:SS.
To specify milliseconds, enter MM/DD/YYYY HH24:MI:SS.MS.
To specify microseconds, enter MM/DD/YYYY HH24:MI:SS.US.
To specify nanoseconds, enter MM/DD/YYYY HH24:MI:SS.NS.
By default, the format specifies microseconds, as follows: MM/DD/YYYY HH24:MI:SS.US.
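As a worked example of the buffer block formula in Default Buffer Block Size above, assume a DTM Buffer Size of 12 MB and a buffer block size of 64 KB; these are illustrative values, not necessarily the defaults on your system:
Number of buffer blocks = 12 MB / 64 KB = 12288 KB / 64 KB = 192
The task can then initialize as long as the combined count of Source Qualifiers, Normalizers for COBOL sources, and targets is less than 192.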
Error handling
The following list describes the error handling options:

Stop on Errors
Indicates how many non-fatal errors the task can encounter before it stops the session. Non-fatal errors include reader, writer, and DTM errors. Enter the number of non-fatal errors you want to allow before stopping the session. The task maintains an independent error count for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to stop.

On Stored Procedure Error
Determines the behavior when a task based on a Visio template encounters pre-session or post-session stored procedure errors. Use one of the following options:
- Stop Session. The task stops when errors occur while executing a pre-session or post-session stored procedure.
- Continue Session. The task continues regardless of errors.
By default, the task stops.

On Pre-Session Command Task Error
Determines the behavior when a task that includes pre-session shell commands encounters errors. Use one of the following options:
- Stop Session. The task stops when errors occur while executing pre-session shell commands.
- Continue Session. The task continues regardless of errors.
By default, the task stops.

On Pre-Post SQL Error
Determines the behavior when a task that includes pre-session or post-session SQL encounters errors. Use one of the following options:
- Stop Session. The task stops when errors occur while executing pre-session or post-session SQL.
- Continue. The task continues regardless of errors.
By default, the task stops.

Error Log Type
Specifies the type of error log to create. You can specify flat file or no log. Default is none. You cannot log row errors from XML file sources; you can view the XML source errors in the session log. Do not use this property when you use the Pushdown Optimization property.

Error Log File Directory
Specifies the directory where errors are logged. By default, the error log file directory is $PMBadFilesDir\.

Error Log File Name
Specifies the error log file name. By default, the error log file name is PMError.log.

Log Row Data
Specifies whether or not to log transformation row data. When you enable error logging, the task logs transformation row data by default. If you disable this property, n/a or -1 appears in transformation row data fields.

Log Source Row Data
Specifies whether or not to log source row data. By default, the check box is clear and source row data is not logged.

Data Column Delimiter
Delimiter for string type source row data and transformation group row data. By default, the task uses a pipe ( | ) delimiter.
Tip: Verify that you do not use the same delimiter for the row data as the error logging columns. If you use the same delimiter, you may find it difficult to read the error log file.
Pushdown optimization
You can use pushdown optimization to push transformation logic to source databases or target databases
for execution. Use pushdown optimization when using database resources can improve task performance.
When you run a task configured for pushdown optimization, the task converts the transformation logic to an
SQL query. The task sends the query to the database, and the database executes the query.
The amount of transformation logic that you can push to the database depends on the database,
transformation logic, and task configuration. The task processes all transformation logic that it cannot push
to a database.
Use the Pushdown Optimization advanced session property to configure pushdown optimization for a task.
You cannot configure pushdown optimization for a mapping task that is based on an elastic mapping.
The pushdown optimization functionality varies depending on the support available for the connector. For
more information, see the help for the appropriate connector.
You can use the following pushdown optimization types:

Source pushdown optimization. The task analyzes the mapping from source to target until it reaches transformation logic that it cannot push to the source database. The task generates and executes a Select statement based on the transformation logic for each transformation that it can push to the database. Then, the task reads the results of the SQL query and processes the remaining transformations.

Target pushdown optimization. The task analyzes the mapping from target to source until it reaches transformation logic that it cannot push to the target database. The task generates an Insert, Delete, or Update statement based on the transformation logic for each transformation that it can push to the target database. The task processes the transformation logic up to the point where it can push the transformation logic to the database. Then, the task executes the generated SQL on the target database.

Full pushdown optimization. The task analyzes the mapping from source to target until it reaches transformation logic that it cannot push to the target database. The task generates and executes SQL statements against the source or target based on the transformation logic that it can push to the database.
You can use full pushdown optimization when the source and target databases are in the same relational
database management system.
When you run a task with large quantities of data and full pushdown optimization, the database server must run a long transaction. Consider the following database performance issues when you generate a long transaction:
- A long transaction uses more database resources.
- A long transaction locks the database for longer periods of time, which reduces database concurrency and increases the likelihood of deadlock.
- A long transaction increases the likelihood that an unexpected event may occur.
To minimize database performance issues for long transactions, consider using source or target pushdown optimization.
For example, you might use source or target pushdown optimization during the peak hours of the day, but use
full pushdown optimization from midnight until 2 a.m. when database activity is low.
To use the pushdown optimization user-defined parameter, perform the following steps:
1. Configure a parameter file to use the $$PushdownConfig user-defined parameter. Save the file to a directory that is local to the Secure Agent that runs the task.
Use the following format to define the parameter:
$$PushdownConfig=<pushdown optimization type>
For example: $$PushdownConfig=Source.
Configure a different parameter file of the same name for each pushdown type that you want to use.
2. In the task, add the Pushdown Optimization property and select the $$PushdownConfig option.
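Following the peak and off-peak scenario above, you might maintain two parameter files with the same name in different directories and point the task at the appropriate one. A sketch, with the values assumed for illustration:
Daytime file: $$PushdownConfig=Source
Overnight file: $$PushdownConfig=Full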
Display properties determine where and how a template parameter displays in the mapping task wizard.
After you upload a Visio template, you can edit the template. If you select a different template XML file, you
can choose between the following options:
With both options, mapplets and template parameters that are not used in the new file are deleted. New
mapplets and template parameters display in the task wizard.
1. To upload a new Visio template, click New > Components > Visio Templates and then click Create.
To edit a Visio template, on the Explore page, navigate to the Visio template. In the row that contains the
Visio template, click Actions and select Edit.
2. Complete the following template details:
Template XML File
Visio template XML file to upload. Perform the following steps to upload a Visio template XML file:
1. Click Select.
2. Browse to locate and select the file you want to use, and then click OK.
3. If the Visio template includes a mapplet template parameter, click Select to select a mapplet.
You can configure the following display properties for each template parameter:

Visible (required)
Determines if the template parameter displays in the mapping task wizard. Use to hide a template parameter that does not need to be displayed.

Editable (required)
Determines if the template parameter is editable in the mapping task wizard.

Required (required)
Determines if a template parameter must be defined in the mapping task wizard.

Valid Connection Types (required for connection template parameters)
Defines the connection type allowed for a connection template parameter. Select a connection type or select All Connection Types.

Logical Connection (optional)
Logical connection name. Use when you want the task developer to use the same connection for logical connections with the same name. Enter any string value, and use the same string for logical connections that should use the same connection. Connection template parameters only.

Input Control (required for string template parameters)
Defines how the task developer can enter information to configure template parameters in the mapping task wizard. String template parameters only.

Field Filtering (optional for condition, expression, and field input controls)
A regular expression to limit the fields from the input control. Use a colon with the include and exclude statements. You can use a combination of include and exclude statements; include statements take precedence. Use semicolons or line breaks to separate field names. Use any valid regular expression syntax. For example:
Include: *ID$; First_Name
Last_Name
Annual_Revenue
Exclude: DriverID$

Left Title (required for field mapping input controls)
Name for the left table of the field mapping display. The left table can display source, mapplet, and lookup fields.

Left Field Filtering (optional for field mapping input controls)
Regular expression to limit the fields that display in the left table of the field mapping display. Uses the same include and exclude syntax as Field Filtering.

Right Title (required for field mapping input controls)
Name for the right table of the field mapping display. The right table can display target, mapplet, and lookup fields.

Right Data Provider (required for field mapping input controls)
Set of fields to display in the right table of the field mapping display:
- All objects. Shows all fields from all possible right table objects.
- <object name>. Individual object names. You can select a single object for the right table fields to display.
- Static. A specified set of fields. Allows you to define the fields to display.

Fields Declaration (required for field mapping input controls)
List of fields to display in the right table of the field mapping display. List field names and associated datatypes separated by a line break or semicolon (;) as follows:
<datatype>(<precision>,<scale>) <field name1>; <field name2>;...
or
<datatype>(<precision>,<scale>) <field name1>
<field name2>
<datatype>(<precision>,<scale>) <field name3>
If you omit the datatype, Data Integration assumes a datatype of String(255). Available when Static is selected for the Right Data Provider. A sample declaration follows this list.

Right Field Filtering (optional for field mapping input controls)
Regular expression to limit the fields that display in the right table of the field mapping display. Uses the same include and exclude syntax as Field Filtering.

Aggregate Functions (required for expression and field mapping input controls)
Enables the display and use of aggregate functions in the Field Expression dialog box in the mapping task wizard.
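For example, a static fields declaration might look like the following sketch. The field names are hypothetical, and the exact datatype spellings may vary; if you omit a datatype, String(255) is assumed:
String(50) FirstName; LastName
Decimal(12,2) AnnualRevenue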
6. To add another step to the mapping task wizard, click New Step.
You can name the step and rename the Parameters step.
7. To order template parameters, use the Move Up and Move Down icons.
You can use the icons to move connection template parameters to the Sources or Targets steps.
You can use the icons to move template parameters from the Parameters step to any new steps.
8. If you want to add advanced session properties, click Add, select the property that you want to use, and
configure the session property value.
9. Click OK.
If you edit a Visio template that is already used by a mapping task, Data Integration performs the following
actions based on the changes that you make:
• When you change the template XML file, Data Integration lists the mapping tasks that use the Visio
template and offers to keep or delete the tasks.
Note: If you choose to delete the tasks, Data Integration deletes all listed tasks immediately. You cannot
undo this action.
• When you change other elements of the Visio template, such as the input control or default value for a
template parameter, Data Integration applies the changes to existing tasks that use the template.
If you have tasks that already use the Visio template, review the listed tasks after making significant changes to the template. If the changes are not compatible with an existing task configuration, the task may fail at run time.
Note: If you want to make significant changes to the Visio template and already have tasks that use the existing template, you might want to upload the template again as a new template and configure the changes in the newly uploaded template.
You cannot delete a Visio template that is used in a mapping task. Before you delete the Visio template,
delete the task or replace the Visio template in the task.
To create and use the Date to String Visio template, complete the following tasks:
1. Configure the Date to String template in the Cloud Integration Template Designer.
2. Upload the Date to String template to Data Integration.
3. Create a mapping task that is based on the Date to String template.
Use objects in the Informatica stencil to create the data flow in a Visio template. Add and configure objects
and links. When the template is complete, validate and publish the template. Then create an image file. You
will use the image file when you upload the Visio template to your Data Integration organization.
1. To create a new template, click File > New > Custom Visio Template > Create.
2. To save and name the file, click File > Save. Select a local directory and name the file: DateToString.vsd.
VSD is the default file type for the Cloud Integration Template Designer.
3. From the Informatica stencil, add a Source Definition object to the template. Double-click the Source
Definition icon.
4. In the Source Definition Details dialog box, configure the following source definition properties.
To configure a property, select the property you want to configure. In the Property area, enter the value
you want to use, and click Apply.
For sources, use the same template parameter name for the transformation name and source table.
Sources are always template parameters, regardless of whether you enclose the name in dollar signs,
but use dollar signs to clearly indicate it is a template parameter to other users.
5. Add a Source Qualifier object to the data flow, and configure the following property:
Transformation Name: SQ
6. Add a Link object from the Source Definition to the Source Qualifier. Double-click the link to configure link rules.
7. To name the link, in the Link Rules dialog box, in the Rule Set Name field, enter All.
8. To configure a link rule that moves all data from the source to the source qualifier, click New Rule. In the
Define Link Rule dialog box, click Include, click All Ports, and click OK.
9. To save your changes and close the Link Rules dialog box, click OK.
10. To configure the expressions to convert dates to strings, add an Expression object to the data flow. On
the Property tab of the Expression Details dialog box, configure the following property:
11. On the Configuration tab, click New Expression, then configure the following details and click Apply.
Expression: iif(IS_DATE(%port%,'$fromdateformat$'),TO_CHAR(TO_DATE(%port%,'$fromdateformat$'),'$todateformat$'),%port%)
A resolved example of this expression appears after these steps.
13. To configure a link rule that moves all data from the source qualifier to the Expression object, click New
Rule.
In the Define Link Rule dialog box, click Include, click All Ports, and click OK.
To save the link rule, click OK.
14. Add a Target Definition object to the data flow. Configure the target definition as follows.
15. To configure a link rule that moves data from the Expression object to the target definition, click New
Rule.
In the Define Link Rule dialog box, click Include.
To include all ports that end in "_o", click Pattern and, for Starting Port Pattern, enter "_o$". Click OK.
To save the link rule, click OK.
16. To validate the Visio template, on the Informatica toolbar, click Validate Mapping Template.
17. To save the Visio template, click File > Save.
18. To create an image file, click File > Save As. Save the file as JPEG or PNG.
You can use the Arrange All icon on the Informatica toolbar to arrange the data flow before taking the
screenshot.
19. To publish the Visio template, on the Informatica toolbar, click Publish Template. Navigate to the
directory you want to use, and click Save.
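To illustrate the expression configured in step 11: if the mapping task defines $fromdateformat$ as yyyy-mm-dd and $todateformat$ as mm-dd-yyyy, the expression for a hypothetical port named ORDER_DATE resolves to the following:
iif(IS_DATE(ORDER_DATE,'yyyy-mm-dd'),TO_CHAR(TO_DATE(ORDER_DATE,'yyyy-mm-dd'),'mm-dd-yyyy'),ORDER_DATE)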
When you upload the Visio template, you can define template parameter descriptions, defaults, and display
options. You can also upload an image file to visually represent the data flow.
1. Click New > Components > Visio Templates and then click Create.
2. On the Visio Template page, configure the following information.
Template Image File: Select the JPG file that you created. After you select the file, the template image displays.
3. To configure the display properties for the $DBsrc$ template parameter, click Edit.
4. In the Edit Parameters Properties dialog box, configure the following options and click OK.
Visible: Select Yes to display the template parameter in the Date to String task.
Editable: Select Yes to allow the task developer to configure the source connection in the Date to String task.
Valid Connection Type: Determines the connection type allowed for the source. You can select a connection type that exists in your organization or select All Connection Types. Select Relational Database.
5. To configure the display properties for the target template parameter, click Edit, configure the following options, and click OK.
Visible: Select Yes to display the template parameter in the Date to String task.
Editable: Select Yes to allow the task developer to configure the target connection in the Date to String task.
Valid Connection Type: Determines the connection type allowed for the target. Select Flat File or Relational Database.
6. Enter the following description for the $fromdateformat$ template parameter to help the task developer understand the information to provide: Source date format.
7. To configure the display properties for the $fromdateformat$, click Edit, configure the following options,
and click OK.
Editable: Select Yes to allow the task developer to configure the date format.
Input Control: Select Text Box. Displays a text box so the user can enter the date format.
8. Enter the following description for the $todateformat$ template parameter to help the user understand the information to provide: Conversion date format.
9. To configure the display properties for the $todateformat$, click Edit, configure the following options,
and click OK.
Editable: Select Yes to allow the task developer to configure the date format.
Input Control: Select Text Box. Displays a text box so the user can enter the date format.
10. To upload the template using the configured template parameters, click OK.
1. Click New > Tasks > Mapping Task, and then click OK.
2. Configure the task details as follows and click Next.
Location: Browse to the folder where you want to store the Date to String mapping task, or use the default location.
Runtime Environment: Select the runtime environment that contains the Secure Agent to use to run the task.
Mapping: Click Select, browse to the Visio template, and then click Select. After you select the template, the template image displays.
3. On the Sources page, select a source connection and source object, and click Next.
Based on the template parameter properties, the wizard displays only database connections.
4. On the Targets page, select a target connection and target object, and click Next.
Based on the template parameter properties, the wizard displays file and database connections.
5. On the Input Parameters page, enter the date format that you want to use for $fromdateformat$.
For example: yyyy-mm-dd.
6. Enter the date format that you want to use for $todateformat$ and click Next.
For example: mm-dd-yyyy.
7. On the Schedule page, select schedule details, email notification options, advanced options, and click
Save.
When you run the task, the Secure Agent reads data from the source, converts dates to the new format, and
writes data to the selected target.
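For example, with the formats above, a source value of 2021-07-15 passes the IS_DATE test and is written to the target as 07-15-2021, while a value that does not match the source format is passed through unchanged.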