ETL FAQ
What are the selection criteria for Data Transformation?
The selection of data transformations is based on the following criteria:
- Separation/Concatenation
- Normalization/De-normalization
- Aggregation
- Conversion
- Algorithmic Conversion
- Conversion by Lookup
- Enrichment
What are the different types of tables in a Data Warehouse?
The different types of tables in a Data Warehouse are:
- Dimension tables
- Fact tables
- Aggregate tables
What is the sequence of loading in a Data Warehouse?
The sequence of loading in a Data Warehouse is as follows:
1. Dimension tables are loaded first, because all fact data depends on these dimension tables.
2. Fact tables are loaded next, because aggregation depends on the fact data.
3. Aggregate tables are loaded last, after the facts are loaded.
What is the ETL toolkit provided in Oracle 9i?
Oracle 9i introduces several new features that change the way data is loaded and transformed. The venerable SQL*Loader utility is joined in Oracle 9i by powerful new features for extraction, transformation, and loading. The most interesting ETL features in Oracle 9i are:
- External tables
- Table functions
- The MERGE statement
Does Oracle 9i support pipelining in ETL operations?
Yes. An Oracle 9i data load can include multi-table insert and upsert semantics. Oracle 9i provides support for external tables, which quickly load data into the database, and a new change data capture facility allows incremental changes from the source systems to be captured and applied to the data warehouse automatically.

What are external tables? How can flat file data be fetched from the database without loading it?
Oracle 9i allows read-only access to data in external tables. External tables are tables that do not reside in the database and can be in any format for which an access driver is provided. The CREATE TABLE ... ORGANIZATION EXTERNAL statement specifies the metadata describing the external table.

What are table functions in Oracle 9i?
A table function takes a set of rows as input and returns a collection of records. Before creating the table function, some types need to be created: because a table function always returns a collection of records, you begin by creating a table type that corresponds to the definition of the ultimate destination table.

What are MERGE statements in Oracle 9i?
The MERGE statement solves the long-standing problem of reloading data that was loaded previously. Prior to 9i, procedural code had to be written to detect whether a row already existed and, based on that, issue an UPDATE or an INSERT. In Oracle 9i you simply use the MERGE statement and let the database handle these details.

What are the advantages of implementing ETL by writing a custom program?
The advantages are:
- The program is not that complex.
- Programmers are easily available.
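For illustration, a minimal sketch of the external table and MERGE syntax discussed above; the directory, table, column, and file names are all hypothetical:

    -- expose a flat file as a read-only external table
    CREATE DIRECTORY ext_dir AS '/data/feeds';

    CREATE TABLE sales_ext (
        sale_id NUMBER,
        amount  NUMBER
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY ext_dir
        ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                           FIELDS TERMINATED BY ',')
        LOCATION ('sales.csv')
    );

    -- MERGE: update existing rows, insert new ones, in a single statement
    MERGE INTO sales s
    USING sales_ext e
    ON (s.sale_id = e.sale_id)
    WHEN MATCHED THEN
        UPDATE SET s.amount = e.amount
    WHEN NOT MATCHED THEN
        INSERT (sale_id, amount) VALUES (e.sale_id, e.amount);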
What are the disadvantages of implementing ETL by writing a custom program?
The disadvantages are:
- The functions to be supported by individual data marts cannot be predicted in advance. In a typical data mart, most of the functionality is defined by end users after the data mart moves into production, so the ETL programs have to be continually modified, resulting in a lot of rework.
- Metadata is not generated automatically in this process, so it is difficult to integrate data marts across the organization.
- Hand-coded ETL programs are likely to have a slower speed of execution and are typically single-threaded.
What are the commonly used techniques for implementing Data Transformation?
The commonly used techniques are:
- Transformation flows
- Transformations provided by SQL*Loader
- Transformations using SQL and PL/SQL
- Data substitution
- Key lookups
- Pivoting
What are the criteria for selecting an Extraction, Transformation and Loading tool?
ETL tool selection is the most important decision to be made when choosing the components of a data warehousing application. The ETL tool operates at the heart of the data warehouse: it extracts data from multiple data sources, transforms the data to make it accessible to business analysis, and loads multiple target databases. There are two options for implementing the extraction, transformation, and loading process:
- Write a custom program in COBOL, C, or PL/SQL to extract data from multiple source files, transform the data, and load the target databases.
- Purchase an off-the-shelf extraction/transformation/loading (ETL) tool.
Simple to complex questions in Informatica and data warehousing:
- What are the broad components of Informatica? (Designer, Workflow Manager, Workflow Monitor, Server Manager)
- What are the various transformations?
- How many kinds of ports are there? What are variable ports?
- How is a Joiner different from a Source Qualifier?
- What is a Normalizer?
- How do you define external variables in a mapping ($$ variables)? In version 6, can $$ variables be used in a lookup?
- How can you generate sequence numbers through Informatica?
- If you are familiar with version 6, can you describe the upgrade features?
- Usage of target override: how do I affect multiple rows in a target?
- What is a lookup? What is a lookup override, and why do we use it?
- Why do we use a lookup cache? How different is it if we don't use it?
- What are connected and unconnected lookups, why do we use them, and under which circumstances? What is the syntax for an unconnected lookup reference?
- What is a Source Qualifier override?
- What are the commands in Update Strategy? (DD_INSERT, DD_UPDATE, DD_DELETE)
- What are pre-SQL and post-SQL queries in a Source Qualifier?
- When do we use a Router transformation?
- What is a star schema? What are fact and dimension tables in a data warehouse? How is a data warehouse different from OLTP?
- How can you make a session appear successful, in spite of a task within it failing?
- What will happen if you increase the commit interval in a session?
- In version 6, if I want Informatica to behave as per the instructions in the Update Strategy, what needs to be taken care of in the properties? (Answer: treat the source rows as data driven)
- What is the difference between active and passive transformations?
- Questions on reusable components, mapplets, etc.
- What are IPF files?
- What is a target load plan? (or a similar question stressing this)
To reuse the same mapping to extract records for other customer accounts, you can enter a new value for the parameter in the parameter file and run the session. Or you can create a parameter file for each customer account and start the session with a different parameter file each time using pmcmd. By using a parameter file, you reduce the overhead of creating multiple mappings and sessions to extract transaction records for different customer accounts.
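A minimal sketch of the parameter file and pmcmd call described above; the folder, workflow, session, and parameter names are all hypothetical, and the exact pmcmd options vary by version:

    # customer_1001.txt -- one parameter file per customer account
    [Sales.WF:wf_CustomerTxn.ST:s_ExtractTxn]
    $$CustomerAccount=1001

    # start the workflow with this parameter file
    pmcmd startworkflow -u Administrator -p password -f Sales -paramfile /etl/params/customer_1001.txt wf_CustomerTxn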
When you validate a worklet, the Workflow Manager validates it using the same validation rules as for workflows. The Workflow Manager validates the worklet instance by verifying attributes in the Parameter tab of the worklet instance. When you use standard validation, the Workflow Manager does not validate nested worklets, or non-reusable worklets and sessions you have not edited.
To validate a worklet and all the worklets nested under it, right-click the worklet and choose Extended Validate. When you use extended validation, the Workflow Manager validates all task instances and nested worklets, regardless of whether you have edited them.
If you cannot free up memory, you might want to add memory to the system.
You can import a lookup table from the mapping source or target database, or you can import a lookup table from any database that both the Informatica Server and the Client machine can connect to. If your mapping includes multiple sources or targets, you can use any of the mapping sources or mapping targets as the lookup table. The lookup table can be a single table, or you can join multiple tables in the same database using a lookup SQL override. The Informatica Server queries the lookup table, or an in-memory cache of the table, for all rows coming into the Lookup transformation. Connect to the database to import the lookup table definition. The Informatica Server can connect to a lookup table using a native database driver or an ODBC driver; however, the native database drivers improve session performance.
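For example, a lookup SQL override might join two tables in the lookup database; a sketch, with hypothetical table and column names:

    SELECT C.CUSTOMER_ID AS CUSTOMER_ID,
           C.CUSTOMER_NAME AS CUSTOMER_NAME,
           R.REGION_NAME AS REGION_NAME
    FROM   CUSTOMER C, REGION R
    WHERE  C.REGION_ID = R.REGION_ID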
A mapplet cannot include the following objects:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Other mapplets
DATE_COMPARE (date1, date2)
Returns an integer indicating which of two dates is earlier. Note that DATE_COMPARE returns an integer value rather than a date value.
Return Value:
-1 if the first date is earlier.
0 if the two dates are equal.
1 if the second date is earlier.
NULL if one of the date values is NULL.
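For example (port names hypothetical):

    DATE_COMPARE(DATE_PROMISED, DATE_SHIPPED)
    -- returns -1 when DATE_PROMISED is earlier than DATE_SHIPPED,
    -- 1 when it is later, and 0 when the two dates are equal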
DATE_DIFF (date1, date2, format)
Returns the length of time, measured in the increment you specify (years, months, days, hours, minutes, or seconds), between two dates. The Informatica Server subtracts the second date from the first date and returns the difference.
Return Value:
Double value. If date1 is later than date2, the return value is a positive number. If date1 is earlier than date2, the return value is a negative number.
Zero if the dates are the same.
NULL if one (or both) of the date values is NULL.
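For example (port names hypothetical):

    DATE_DIFF(DATE_SHIPPED, DATE_ORDERED, 'DD')
    -- number of days between the two dates, returned as a Double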
GET_DATE_PART (date, format)
Returns the specified part of a date as an integer value. For example, if you create an expression that returns the month portion of the date and pass a date such as Apr 1 1997 00:00:00, GET_DATE_PART returns 4.
Return Value:
Integer representing the specified part of the date.
NULL if a value passed to the function is NULL.
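For example (port name hypothetical):

    GET_DATE_PART(DATE_SHIPPED, 'MM')
    -- returns the month as an integer, e.g. 4 for an April date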
LAST_DAY (date)
Returns the date of the last day of the month for each date in a port.
Return Value:
Date. The last day of the month for the date value you pass to this function.
NULL if a value in the selected port is NULL.
Nulls:
If a value is NULL, LAST_DAY ignores the row. However, if all values passed from the port are NULL, LAST_DAY returns NULL.
Group By:
LAST_DAY groups values based on group by ports you define in the transformation, returning one result for each group. If there is no group by port, LAST_DAY treats all rows as one group, returning one value.

MAX (date [, filter_condition])
Returns the latest date found within a port or group. You can apply a filter to limit the rows in the search. You can nest only one other aggregate function within MAX. MAX is one of several aggregate functions; you use aggregate functions in Aggregator transformations only. You can also use MAX to return the largest numeric value in a port or group.
Return Value:
Date.
NULL if all values passed to the function are NULL, or if no rows are selected (for example, the filter condition evaluates to FALSE or NULL for all rows).

MIN (date [, filter_condition])
Returns the oldest date found in a port or group. You can apply a filter to limit the rows in the search. You can nest only one other aggregate function within MIN, and the nested function must return a date datatype. MIN is one of several aggregate functions; you use aggregate functions in Aggregator transformations only. You can also use MIN to return the minimum numeric value in a port or group.
Return Value:
Date if the value argument is a date.
NULL if all values passed to the function are NULL, or if no rows are selected (for example, the filter condition evaluates to FALSE or NULL for all rows).
Nulls:
If a single value is NULL, MIN ignores it. However, if all values passed from the port are NULL, MIN returns NULL.
Group By:
MIN groups values based on group by ports you define in the transformation, returning one result for each group. If there is no group by port, MIN treats all rows as one group, returning one value.

ROUND (date [, format])
Rounds one part of a date. You can also use ROUND to round numbers.
Return Value:
Date with the specified part rounded. ROUND returns a date in the same format as the source date. You can link the results of this function to any port with a Date/Time datatype.
NULL if you pass a null value to the function.

SET_DATE_PART (date, format, value)
Sets one part of a date/time value to a value you specify.
Return Value:
Date in the same format as the source date with the specified part changed.
NULL if a value passed to the function is NULL.
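For example, in an Aggregator transformation (port names and the filter condition are hypothetical):

    MAX(ORDER_DATE, ORDER_STATUS = 'SHIPPED')
    -- latest shipped-order date in each group
    MIN(ORDER_DATE)
    -- oldest order date in each group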
TRUNC (date [, format])
Truncates a date to the year, month, day, hour, or minute you specify.
Return Value:
Date.
NULL if a value passed to the function is NULL.
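For example, for the ROUND, TRUNC, and SET_DATE_PART functions described above (port name hypothetical):

    ROUND(DATE_SHIPPED, 'MM')
    -- rounds to the nearest month
    TRUNC(DATE_SHIPPED, 'YYYY')
    -- truncates to the first day of the year
    SET_DATE_PART(DATE_SHIPPED, 'YY', 2001)
    -- changes the year portion of the date to 2001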
Using Workflow Variables
The Informatica Server creates pre-defined workflow variables each time you create a new task. You create user-defined workflow variables when you create a workflow. You can use workflow variables when you configure the following types of tasks:
- Assignment tasks. You can use an Assignment task to assign a value to a user-defined workflow variable. For example, you can increment a user-defined counter variable by setting the variable to its current value plus 1.
- Decision tasks. Decision tasks determine how the Informatica Server executes a workflow. For example, you can use the Status variable to run a second session only if the first session completes successfully.
- Links. Links connect each workflow task. You can use workflow variables in link conditions to create branches in the workflow. For example, after a Decision task, you can create one link to follow when the decision condition evaluates to true, and another link to follow when it evaluates to false. The Workflow Manager does not allow you to use links to create loops in the workflow; each link in the workflow can execute only once.
- Timer tasks. Timer tasks specify when the Informatica Server begins to execute the next task in the workflow. You can use a user-defined date/time variable to specify the exact time the Informatica Server starts to execute the next task.
Pre-Defined Workflow Variables
The Workflow Manager creates a set of pre-defined variables for every workflow. There are two types:
Task-specific variables. The Workflow Manager creates a set of task-specific variables for each task in the workflow. You can use task-specific variables to represent information such as the time a task ended, the number of rows written to a target in a session, or the result of a Decision task. The Workflow Manager lists task-specific variables under the task name in the Expression Editor.
System variables. You can use the SYSDATE and WORKFLOWSTARTTIME system variables within a workflow. The Workflow Manager lists system variables under the Built-in node in the Expression Editor.
You can use pre-defined variables within a workflow. You cannot modify or delete pre-defined workflow variables.
The task-specific workflow variables are:
- Condition (Integer). Evaluation result of the decision condition expression. If the task fails, the Workflow Manager keeps the condition set to null.
- EndTime (Date/Time). Date and time the associated task ended.
- ErrorCode (Integer). Last error code for the associated task. If there is no error, the Informatica Server sets ErrorCode to 0 when the task completes.
- ErrorMsg (Nstring). Last error message for the associated task. If there is no error, the Informatica Server sets ErrorMsg to an empty string when the task completes.
- FirstErrorCode (Integer). Error code for the first error message in the session. If there is no error, the Informatica Server sets FirstErrorCode to 0 when the session completes.
- FirstErrorMsg (Nstring). The first error message in the session. If there is no error, the Informatica Server sets FirstErrorMsg to an empty string when the task completes.
- PrevTaskStatus (Integer). Status of the task that the Workflow Manager executes immediately before the current task. If the previous task succeeded, the Workflow Manager sets PrevTaskStatus to SUCCEEDED; otherwise, it sets PrevTaskStatus to FAILED.
- SrcFailedRows (Integer). Total number of rows read from the sources that failed.
- SrcSuccessRows (Integer). Total number of rows successfully read from the sources.
- StartTime (Date/Time). Date and time the associated task started.
- Status (Integer). Execution status. Task statuses include: ABORTED, DISABLED, FAILED, NOTSTARTED, STARTED, STOPPED, SUCCEEDED.
- TgtFailedRows (Integer). Total number of rows that the targets rejected.
- TgtSuccessRows (Integer). Total number of rows successfully written to the targets.
- TotalTransErrors (Integer). Total number of transformation errors.
Decision
The Decision task allows you to enter a condition that determines the execution of the workflow, similar to a link condition. The Decision task has a pre-defined variable called $Decision_task_name.Condition that represents the result of the decision condition. The Informatica Server evaluates the condition in the Decision task and sets the pre-defined condition variable to True (1) or False (0). You can specify one decision condition per Decision task. After the Informatica Server evaluates the Decision task, you can use the pre-defined condition variable in other expressions in the workflow to help you develop the workflow. Depending on the workflow, you might use link conditions instead of a Decision task. If you do not specify a condition in the Decision task, the Informatica Server evaluates the Decision task to true.
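A small sketch of how a Decision task and its pre-defined variable might be used (the task and session names are hypothetical):

    -- Decision task Dec_CheckLoads, decision condition:
    $s_LoadDims.Status = SUCCEEDED AND $s_LoadFacts.Status = SUCCEEDED

    -- link condition leaving the Decision task:
    $Dec_CheckLoads.Condition = 1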
Assignment
The Assignment task allows you to assign a value to a user-defined workflow variable. To use an Assignment task in the workflow, first create and add the Assignment task to the workflow. Then configure the Assignment task to assign values or expressions to user-defined variables. After you assign a value to a variable using the Assignment task, the Informatica Server uses the assigned value for the variable during the remainder of the workflow. You must create a variable before you can assign values to it. You cannot assign values to pre-defined workflow variables.
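For example, a user-defined counter variable might be incremented in an Assignment task with an expression like the following (the variable name $$WorkflowCount is hypothetical):

    $$WorkflowCount = $$WorkflowCount + 1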
Timer
The Timer task allows you to specify the period of time to wait before the Informatica Server executes the next task in the workflow. You can choose to start the next task in the workflow at an exact time and date. You can also choose to wait a period of time after the start time of another task, workflow, or worklet before starting the next task. The Timer task has two types of settings:
Absolute time. You specify the exact time that the Informatica Server starts executing the next task in the workflow. You can specify the exact date and time, or you can choose a user-defined workflow variable to specify the exact time.

Relative time. You instruct the Informatica Server to wait for a specified period of time after the Timer task, the parent workflow, or the top-level workflow starts. For example, suppose you have two sessions in the workflow and you want the Informatica Server to wait ten minutes after the first session completes before it executes the second session. Use a Timer task after the first session, and in the Relative Time setting of the Timer task, specify ten minutes from the start time of the Timer task.
Control
You can use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link condition. A parent workflow or worklet is the workflow or worklet that contains the Control task. The Control task options are:
- Fail Me. Marks the Control task as Failed. The Informatica Server fails the Control task if you choose this option. If you choose Fail Me in the Properties tab and choose Fail Parent If This Task Fails in the General tab, the Informatica Server fails the parent workflow.
- Fail Parent. Marks the status of the workflow or worklet that contains the Control task as Failed after the workflow or worklet completes.
- Stop Parent. Stops the workflow or worklet that contains the Control task.
- Abort Parent. Aborts the workflow or worklet that contains the Control task.
- Fail Top-Level Workflow. Fails the workflow that is running.
Pre-defined event. A pre-defined event is a file-watch event. For pre-defined events, use an Event-Wait task to instruct the Informatica Server to wait for the specified indicator file to appear before continuing with the rest of the workflow. When the Informatica Server locates the indicator file, it starts the next task in the workflow.
User-defined event. A user-defined event is a sequence of tasks in the workflow; specifically, the sequence of tasks in the branch from the Start task leading to the Event-Raise task. Use an Event-Raise task to specify the location of the user-defined event in the workflow. When all the tasks in the branch from the Start task to the Event-Raise task complete, the Event-Raise task triggers the event. The Event-Wait task waits for the Event-Raise task to trigger the event before continuing with the rest of the tasks in its branch.
Event-Raise Task
The Event-Raise task represents a user-defined event. When the Informatica Server executes the Event-Raise task, the Event-Raise task triggers the event. Use the Event-Raise task with the Event-Wait task to define events.
To use an Event-Raise task, you must first declare the user-defined event. Then, create an Event-Raise task in the workflow to represent the location of the user-defined event you just declared. In the Event-Raise task properties, specify the name of a user-defined event.

Event-Wait Task
The Event-Wait task waits for an event to occur. Once the event triggers, the Informatica Server continues executing the rest of the workflow. The Event-Wait task waits for a pre-defined event or a user-defined event.
A pre-defined event is a file-watch event. When you use the Event-Wait task to wait for a pre-defined event, you specify an indicator file for the Informatica Server to watch. The Informatica Server waits for the indicator file to appear; once it appears, the Informatica Server continues executing tasks after the Event-Wait task. Do not use the Event-Raise task to trigger the event when you wait for a pre-defined event.
You can also use the Event-Wait task to wait for a user-defined event. To use the Event-Wait task for a user-defined event, you specify the name of the user-defined event in the Event-Wait task properties. The Informatica Server waits for the Event-Raise task to trigger the user-defined event; once the user-defined event is triggered, the Informatica Server continues executing tasks after the Event-Wait task.

Waiting for Pre-Defined Events
To use a pre-defined event, you need a shell command, script, or batch file to create an indicator file. The file must be created or sent to a directory local to the Informatica Server. The file can be in any format recognized by the Informatica Server operating system. You can choose to have the Informatica Server delete the indicator file after it detects the file, or you can manually delete the indicator file. The Informatica Server marks the status of the Event-Wait task as failed if it cannot delete the indicator file.
When you specify the indicator file in the Event-Wait task, enter the directory in which the file will appear and the name of the indicator file. You must provide the absolute path for the file, and the directory must be local to the Informatica Server. If you specify only the file name and not the directory, the Workflow Manager looks for the indicator file in the system directory. The Informatica Server writes the time the file appears in the workflow log.
Note: Do not use a source or target file name as the indicator file name.
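For pre-defined (file-watch) events, the indicator file is typically created by a simple shell command once the upstream process finishes; a minimal sketch, with a hypothetical path and file name:

    # signal the Event-Wait task that the feed has arrived
    touch /u01/informatica/indicators/orders_loaded.ind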
Test Load
With a test load, the Informatica Server writes data to relational targets, but rolls back the data when the session completes. For all other target types, such as flat file and SAP BW, the Informatica Server does not write data to the targets. Enter the number of source rows you want to test in the Number of Rows to Test field. You cannot perform a test load on sessions using XML sources.
Note: You can perform a test load when you configure a session for normal mode. If you configure the session for bulk mode, the session fails.
Incremental Aggregation
Select the Incremental Aggregation option if you want the Informatica Server to perform incremental aggregation.

Informatica Server Processing for Incremental Aggregation
The first time you run a session with incremental aggregation enabled, the Informatica Server processes the entire source. At the end of the session, the Informatica Server stores aggregate data from that session run in two files, the index file and the data file, which it creates in a local directory. The second time you run the session, use only the changes in the source as the source data for the session. The Informatica Server then performs the following actions:
For each input record, the Informatica Server checks historical information in the index file for a corresponding group, then:
- If it finds a corresponding group, the Informatica Server performs the aggregate operation incrementally, using the aggregate data for that group, and saves the incremental change.
- If it does not find a corresponding group, the Informatica Server creates a new group and saves the record data.
When writing to the target, the Informatica Server applies the changes to the existing target:
- Updates modified aggregate groups in the target.
- Inserts new aggregate data.
- Deletes removed aggregate data.
- Ignores unchanged aggregate data.
- Saves modified aggregate data in the index and data files to be used as historical data the next time you run the session.
Each subsequent time you run the session with incremental aggregation, you use only the incremental source changes in the session. If the source changes significantly, and you want the Informatica Server to continue saving aggregate data for future incremental changes, configure the Informatica Server to overwrite existing aggregate data with new aggregate data. When you partition a session that uses incremental aggregation, the Informatica Server creates one set of cache files for each partition. If you change the partitioning information after you run an incremental aggregation session, the Informatica Server realigns the cache files the next time you run the incremental aggregation session. The Informatica Server creates new aggregate data, instead of using historical data, when you perform one of the following tasks:
- Save a new version of the mapping.
- Select Reinitialize Aggregate Cache in the session property sheet.
- Move the aggregate files without correcting the configured path or directory for the files in the session property sheet.
- Change the configured path or directory for the aggregate files in the session property sheet without moving the files to the new location.
Reinitializing the aggregate cache overwrites historical aggregate data with new aggregate data. When you reinitialize the aggregate cache, instead of using the captured changes in the source tables, you typically need to use the entire source table. You might use this option when the source tables change dramatically. After you run a session that reinitializes the aggregate cache, edit the session properties to disable the Reinitialize Aggregate Cache option. If you do not clear Reinitialize Aggregate Cache, the Informatica Server overwrites the aggregate cache each time you run the session.
Note: When you move from Windows to UNIX, you must reinitialize the cache. You cannot change from a Latin1 code page to an MSLatin1 code page, even though these code pages are compatible.
Do not enable incremental aggregation in the following circumstances:
- You cannot capture new source data.
- Processing the incrementally changed source significantly changes the target.
- Your mapping contains percentile or median functions.
Capturing Incremental Changes Before enabling incremental aggregation, you must capture changes in source data. You might do this by:
- Using a filter in the mapping. You may be able to remove pre-existing source data during a session with a filter (see the sketch after this section).
- Using a stored procedure. You may be able to remove pre-existing source data at the source database with a pre-load stored procedure.

Creating the File Directory
When you run multiple sessions with incremental aggregation, decide where you want the files stored. Then enter the appropriate directory for the server variable, $PMCacheDir, in the Workflow Manager. You can enter session-specific directories for the index and data files. However, by using the server variable for all sessions using incremental aggregation, you can easily change the cache directory when necessary by changing $PMCacheDir.
Note: Changing the cache directory without moving the files causes the Informatica Server to reinitialize the aggregate cache and gather new aggregate data.
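As a sketch of the filter approach mentioned above (the column name UPDATE_TS and the mapping variable $$LastRunDate are hypothetical):

    -- Filter transformation condition: pass only rows changed since the last run
    UPDATE_TS > $$LastRunDate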
Override Tracing
Overrides the tracing levels set at the transformation level. Selecting this option enables a menu from which you choose a tracing level: None, Terse, Normal, Verbose Initialization, or Verbose Data.

- None. The Informatica Server uses the tracing level set in the mapping.
- Terse. The Informatica Server logs initialization information as well as error messages and notification of rejected data.
- Normal. The Informatica Server logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. Summarizes session results, but not at the level of individual rows.
- Verbose Initialization. In addition to normal tracing, the Informatica Server logs additional initialization details, names of index and data files used, and detailed transformation statistics.
- Verbose Data. In addition to verbose initialization tracing, the Informatica Server logs each row that passes into the mapping. Also notes where the Informatica Server truncates string data to fit the precision of a column and provides detailed transformation statistics.
You can also enter tracing levels for individual transformations in the mapping. When you enter a tracing level in the session properties, you override tracing levels configured for transformations in the mapping.
The Informatica Server can perform the following types of pre- and post-session shell commands:
- Pre-session command. The Informatica Server performs pre-session shell commands at the beginning of a session.
- Post-session success command. The Informatica Server performs post-session success commands only if the session completed successfully.
- Post-session failure command. The Informatica Server performs post-session failure commands only if the session failed to complete.
You can configure a session to stop or continue if a pre-session shell command fails. Use the following guidelines to call a shell command:
- Use any valid UNIX command or shell script for UNIX servers, or any valid DOS command or batch file for Windows servers.
- Configure the session to execute the pre- or post-session shell commands.
The Workflow Manager provides a task called the Command task that allows you to specify shell commands anywhere in the workflow. You can choose a reusable Command task for the pre- or post-session shell command, or you can create non-reusable shell commands for the pre- or post-session shell commands. If you create non-reusable pre- or post-session shell commands, you have the option to make them into a reusable Command task. The Workflow Manager allows you to choose from the following options when you configure shell commands:
- Create non-reusable shell commands. Create a non-reusable set of shell commands for the session. Other sessions in the folder cannot use this set of shell commands.
- Use an existing reusable Command task. Select an existing Command task to run as the pre- or post-session shell command.
Configure pre- and post-session shell commands in the Components tab of the session properties.

Using Server and Session Variables
You can include any server variable, such as $PMTargetFileDir, or session variables in pre-session and post-session commands. When you use a server variable instead of entering a specific directory, you can run the same workflow on different Informatica Servers without changing session properties. You cannot use server variables or session variables in standalone Command tasks in the workflow; the Informatica Server does not expand server variables or session variables used in standalone Command tasks.

Configuring Non-Reusable Shell Commands
When you create non-reusable pre- or post-session shell commands, the commands are only visible in the session properties. The Workflow Manager does not create a Command task from the non-reusable pre- or post-session shell commands. You have the option to make a non-reusable shell command into a reusable Command task.

Creating a Reusable Command Task from Pre- or Post-Session Commands
If you create non-reusable pre- or post-session shell commands, you have the option to make them into a reusable Command task.
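For example, a post-session success command might archive the target file using the $PMTargetFileDir server variable; a sketch, with a hypothetical archive path and file name:

    cp $PMTargetFileDir/orders_out.dat /u01/archive/orders_out_$(date +%Y%m%d).dat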
Constraint-Based Loading
When you select this option, the Informatica Server orders the target load on a row-by-row basis. For every row generated by an active source, the Informatica Server loads the corresponding transformed row first to the primary key table, then to any foreign key tables. Constraint-based loading depends on the following requirements:
- Active source. Related target tables must have the same active source.
- Key relationships. Target tables must have key relationships.
- Target connection groups. Targets must be in one target connection group.
- Treat rows as insert. Use this option when you insert into the target. You cannot use updates with constraint-based loading.
Active Source
The following transformations can be an active source within a mapping:
- Source Qualifier
- Normalizer (COBOL or flat file)
- Advanced External Procedure
- Aggregator
- Joiner
- Rank
- Sorter
- Mapplet, if it contains one of the above transformations
Key Relationships
When target tables have no key relationships, the Informatica Server does not perform constraint-based loading. Similarly, when target tables have circular key relationships, the Informatica Server reverts to a normal load. For example, you have one target containing a primary key and a foreign key related to the primary key in a second target, and the second target also contains a foreign key that references the primary key in the first target. The Informatica Server cannot enforce constraint-based loading for these tables and reverts to a normal load.
Target Connection Groups
To verify that all targets are in the same target connection group, complete the following tasks:
- Verify all targets are in the same target load order group and receive data from the same active source.
- Use the default partition properties and do not add partitions or partition points.
- Define the same target type for all targets in the session properties.
- Define the same database connection name for all targets in the session properties.
- Choose normal mode for the target load type for all targets in the session properties.
Treat Rows as Insert Use constraint-based loading only when the session option Treat Source Rows As is set to Insert. You might get inconsistent data if you select a different Treat Source Rows As option and you configure the session for constraint-based loading.
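As a small illustration of the key relationships that constraint-based loading relies on, a hedged sketch of two related target tables (all names hypothetical):

    CREATE TABLE CUSTOMER_DIM (
        CUSTOMER_ID   NUMBER PRIMARY KEY,
        CUSTOMER_NAME VARCHAR2(100)
    );

    CREATE TABLE ORDERS_FACT (
        ORDER_ID    NUMBER PRIMARY KEY,
        CUSTOMER_ID NUMBER REFERENCES CUSTOMER_DIM (CUSTOMER_ID),
        AMOUNT      NUMBER
    );

    -- With constraint-based loading and Treat Source Rows As set to Insert,
    -- each CUSTOMER_DIM row is loaded before the ORDERS_FACT rows that
    -- reference it.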
What are the factors to be considered before configuring the repository environment?
<Need to Refer>
Apart from using the Abort function to stop a session, what is the other way to stop a session?
<Need to Refer>
The Informatica Server suspends the parent workflow when the status of the worklet is Suspended or Suspending.
You can create reusable worklets in the Worklet Designer; you can view a list of reusable worklets in the Navigator Worklets node. You can also create non-reusable worklets in the Workflow Designer as you develop the workflow. Non-reusable worklets exist only in the workflow, and you cannot use a non-reusable worklet in another workflow. After you create the worklet in the Workflow Designer, open the worklet to edit it in the Worklet Designer. You can promote non-reusable worklets to reusable worklets by selecting the Reusable option in the worklet properties.

Configuring Worklet Properties
When you use a worklet in a workflow, you can configure the same set of general task settings on the General tab as for any other task. In addition to general task settings, you can configure the following worklet properties:
- Worklet variables. Use worklet variables to reference values and record information. You use worklet variables the same way you use workflow variables. You can assign a workflow variable to a worklet variable to override its initial value.
- Events. To use the Event-Wait and Event-Raise tasks in the worklet, you must first declare an event in the worklet properties.
- Metadata extensions. Extend the metadata stored in the repository by associating information with repository objects.

Declaring Events in Worklets
Similar to workflows, you can use Event-Wait and Event-Raise tasks in a worklet. To use the Event-Raise task, you first declare a user-defined event in the worklet. Events in one instance of a worklet do not affect events in other instances of the worklet. You cannot specify worklet events in the Event tasks in the parent workflow.

Using Worklet Variables
Worklet variables are similar to workflow variables. A worklet has the same set of pre-defined variables as any task. You can also create user-defined worklet variables. Like user-defined workflow variables, user-defined worklet variables can be persistent or non-persistent.
You cannot use variables from the parent workflow in the worklet. Similarly, you cannot use user-defined worklet variables in the parent workflow. However, you can use pre-defined worklet variables in the parent workflow, just as you can use pre-defined variables for other tasks in the workflow.

Persistent Worklet Variables
To create a persistent worklet variable, select Persistent when you create the variable. When you create a persistent worklet variable, the worklet variable retains its value the next time the Informatica Server executes the worklet instance in the parent workflow. Worklet variables only persist when you run the same workflow; a worklet variable does not retain its value when you use instances of the worklet in different workflows.
The Informatica Server writes information about worklet execution in the workflow log. When you choose Suspend On Error for the parent workflow, the Informatica Server also suspends the worklet if a task in the worklet fails. When a task in the worklet fails, the Informatica Server stops executing the failed task and other tasks in its path. If no other task is running in the worklet, the worklet status is Suspended. If one or more tasks are still running in the worklet, the worklet status is Suspending.