Informatica Interview Questions

The document provides detailed information on various Informatica transformations, including Filter, Router, Sorter, Joiner, Lookup, and Union transformations, along with their functionalities and differences. It also discusses caching mechanisms, strategies for optimizing performance, and specific use cases for loading and transforming data. Additionally, it covers the differences between connected and unconnected lookups, as well as the implications of using different types of caches in Informatica.

Informatica Questions

1. How to check if a target definition is already used by any other mappings?

Ans: Right-click the target definition → Dependencies → Unselect All → select Mappings → OK. This lists all the mappings in which the target is used.

2. Load data from flat file to flat file (CSV to flat file, CSV to CSV)
1) How do you handle comma-separated data when the data itself contains a comma (for example, an address column) while importing flat files in the Source Analyzer?
Ans: Enclose the values in double quotes and set the flat file's text qualifier to double quotes.
3. Filter Transformation
• It is a connected and active transformation.
• It is used to filter rows/records based on a condition.
• Rows/records that do not satisfy the condition are dropped.
• If the filter condition is set to TRUE, it passes all records.
• We can use multiple Filter transformations in a mapping.
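A minimal sketch of a filter condition, assuming hypothetical ports SALARY and JOB:

    SALARY > 5000 AND JOB = 'ANALYST'

Rows for which the expression evaluates to FALSE (or NULL) are dropped silently; they are not written to the reject file.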

4. Router Transformation
• It is a connected and active transformation.
• It is similar to the Filter transformation.
• With a single transformation we can route rows to multiple targets using group conditions.
• Rows/records that do not satisfy any condition go to the default group.

5. Sorter Transformation
• It is a connected and active transformation.
• It is active because we can select the Distinct option in the Sorter properties, which can change the number of rows.
• It is used to sort data in ascending or descending order.
Transformation | Type | Description
Aggregator | Active, Connected | Performs aggregate calculations.
Expression | Passive, Connected | Calculates a value.
Java | Active or Passive, Connected | Executes user logic coded in Java. The bytecode for the user logic is stored in the repository.
Joiner | Active, Connected | Joins data from different databases or flat file systems.
Lookup | Active or Passive, Connected or Unconnected | Looks up and returns data from a flat file, relational table, view, or synonym.
Normalizer | Active, Connected | Used in the pipeline to normalize data from relational or flat file sources.
Rank | Active, Connected | Limits records to a top or bottom range.
Router | Active, Connected | Routes data into multiple transformations based on group conditions.
SQL | Active or Passive, Connected | Executes SQL queries against a database.
Union | Active, Connected | Merges data from different databases or flat file systems.
XML Generator | Active, Connected | Reads data from one or more input ports and outputs XML through a single output port.
XML Parser | Active, Connected | Reads XML from one input port and outputs data to one or more output ports.
XML Source Qualifier | Active, Connected | Represents the rows that the Integration Service reads from an XML source when it runs a session.
6. Types of Joins in Joiner Transformation
Ans: Normal Join, Master Outer Join, Detail Outer Join, and Full Outer Join.

7. How to join two tables from different sources?

• We can use a Joiner transformation: use the matching column as the join condition to join the two tables.
• We can also use a Union transformation if the tables have common columns and we need to combine the data vertically. Create one Union transformation, add the matching ports from the two sources to two different input groups, and connect the output group to the target.
The basic idea is to use either a Joiner or a Union transformation to move the data from two sources to a single target. Based on the requirement, we decide which one to use.

8. What is the difference between Filter and Router?

Filter:
1. Single input and single output.
2. We cannot capture the non-satisfying data/result; it is dropped.

Router:
1. Single input and multiple outputs.
2. We can capture the non-satisfying data/result in the default group.

9. Why is the Sorter transformation active?

This is because we can select the Distinct option in the Sorter properties. The number of input rows can then differ from the number of output rows, which makes it an active transformation.
EXAMPLE: 28 input rows can become 14 output rows after using the Distinct option in the Sorter properties.

10. How to remove duplicates in Informatica?

Ans: Using the Distinct option in the Sorter transformation, or Select Distinct in the Source Qualifier.

11. What are the data cache and index cache?

Aggregator, Joiner, Lookup, and Rank transformations require an index cache and a data cache. The Integration Service stores key values in the index cache and output values in the data cache. Sorter transformations require a single cache; the Integration Service stores the sort keys and the data to be sorted in the Sorter cache.
12. Why is the Joiner transformation active?
Ans: Because the number of output rows need not equal the number of input rows (for example, with a Normal join, unmatched rows are dropped).

13. Difference between Router and Union transformations

Router - one input and many outputs.

Union - many inputs and one output.

14. How to remove duplicates in a Union transformation?

• The Union transformation does not remove duplicates. To remove duplicate rows, use a Sorter transformation with the "Select Distinct" option after the Union transformation.
• The Union transformation does not generate transactions.
• You cannot connect a Sequence Generator transformation to the Union transformation.
15. Why is the Union transformation an active transformation? (Important)
Ans: The Union transformation is built from the Custom transformation, which is why it is classified as active: it combines rows from multiple input groups into a single output group.

16. Update Strategy Transformation

• It is a connected and active transformation.
• It is used to mark each input record before passing it to the target.
• Records are marked for INSERT, UPDATE, DELETE, or REJECT.
• The prerequisite for the Update Strategy is that the target table should have a primary key.

Update Strategy can be implemented in two ways

1) Mapping Level: Using Update Strategy Transformation by mentioning the flag


2) Session Level: By selecting the session property "Treat Source Rows As"
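A minimal sketch of a mapping-level flag expression in the Update Strategy transformation, assuming a hypothetical lookup port lkp_emp_id that is NULL when the incoming row is new:

    IIF(ISNULL(lkp_emp_id), DD_INSERT, DD_UPDATE)

DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT are the built-in constants (0, 1, 2, and 3 respectively); the session must be set to "Data Driven" for these flags to take effect.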
17. Can we change a relational target to a flat file target?
Ans: Yes, through the session settings in the Workflow Manager.

Row Indicator
The first column in the reject file is the row indicator, which identifies how the row was flagged (insert, update, delete, or reject).

Column Indicator
After the row indicator is a column indicator, followed by the first column of data, another column indicator, and so on.

A column indicator accompanies every column of data and defines the type of that data (valid, overflow, null, or truncated).
By default, rejected rows are written to the reject file and the session log.
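A hedged illustration of a reject (.bad) file line, with assumed sample data:

    0,D,1921,D,Nelson,D,415-541-5145

Here the leading 0 is the row indicator (0 = insert, 1 = update, 2 = delete, 3 = reject), and each D is a column indicator marking the following value as valid data (O = overflow, N = null, T = truncated).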

If the session is not set to Data Driven, the session-level "Treat Source Rows As" setting takes precedence.

Difference between mapping-level and session-level Update Strategy

Mapping-level Update Strategy - We can insert, update, delete, or reject specific records based on conditions.

Session-level Update Strategy - The chosen action (insert, update, delete, or reject) applies to all records; we cannot insert, update, delete, or reject selectively based on conditions.
18. Lookup Transformation
• It is similar to the Joiner transformation.
• It is used to look up data from a source, target, source qualifier, or relational database.
• There are two tables involved in a Lookup transformation: one is the input table and the other is the lookup table.

Types of Lookups:

a) Based on rows:
1) Passive (default): the number of output rows always equals the number of input rows.
2) Active: the number of output rows need not equal the number of input rows.

From version 9 onwards, a Lookup can be configured as either active or passive.

A Lookup can be a connected or unconnected transformation.

b) Based on pipeline connection:

1) Connected
2) Unconnected
3) Pipeline lookup (introduced in version 9)
Lookup caches can also be: 1) shared or unshared, and 2) named or unnamed.

What type of join does a Lookup perform?

Ans: Left outer join.

What are the types of caches in a Lookup? Explain them.

Based on the configuration at the Lookup transformation / session property level, we can have the following types of lookup caches:

• Uncached lookup - Here the Lookup transformation does not create a cache. For each record, it goes to the lookup source, performs the lookup, and returns the value. So for 10K input rows, it queries the lookup source 10K times.
• Cached lookup - To reduce the back-and-forth communication between the Informatica server and the lookup source, we can configure the Lookup transformation to build a cache. The entire data set from the lookup source is cached, and all lookups are performed against the cache.

Based on the types of caches configured, we can have two types: static and dynamic.

The Integration Service performs differently based on the type of lookup cache that is configured: an uncached lookup, a static cache, or a dynamic cache.

Persistent Cache

By default, lookup caches are deleted after successful completion of the respective session, but we can configure the cache to be preserved so it can be reused next time.

Shared Cache

We can share the lookup cache between multiple transformations. We can share an unnamed cache
between transformations in the same mapping. We can share a named cache between
transformations in the same or different mappings.

Cached and uncached lookups work the same way; the result is identical, and only the performance differs.

To improve performance, Informatica first reads all the data from the lookup table and loads it into the cache; it then reads the input source row by row and compares each row directly against the cache. The result is loaded to the target, and performance improves because no round trip to the lookup source is needed per row.

If caching is enabled, the static cache is the default.

Data from the condition columns of the lookup table is stored in the index cache, and data from the remaining (output) columns of the lookup is stored in the data cache.

What is the difference between Joiner and Lookup?


Joiner:
1. Joiner always uses a cache.
2. In Joiner we cannot override the query.
3. In Joiner only the "=" (equal to) operator is available.
4. In Joiner we can join tables using a Normal, Master Outer, Detail Outer, or Full Outer join.
5. In Joiner we cannot restrict the number of rows while reading.

Lookup:
1. Lookup can be cached or uncached.
2. In Lookup we can override the query.
3. In Lookup we can use other operators as well, such as >, <, >=, <=, !=.
4. In Lookup this choice is not available; a Lookup behaves like a database left outer join.
5. In Lookup we can restrict the number of rows read from a relational table using a lookup override.

Joiner and Lookup can both join, and both can do a left outer join; which one is better?
Ans: The Joiner always caches (the data from the master source is stored in cache), while in a Lookup the data from the lookup source is stored in cache.

Both have a data cache and an index cache, but in a Lookup we cannot select the join type; there is no left outer / right outer option, and it is a left outer join by default.
Persistent Cache

A persistent cache is created on disk at the end of the session: the contents of the static/dynamic cache are written to disk, and this file is called the persistent cache (permanent cache).

When a persistent cache is used, the static cache is built from the lookup source in the first run; from the second run onwards, it is built from the persistent cache file.

Do not use a persistent cache if the lookup table values change often, as the cache goes stale. Enable it only if the lookup table values rarely change.

A persistent cache improves performance because the cache is rebuilt from the local disk file instead of re-querying the lookup source.

The disadvantage of a persistent cache is that it is not updated when the lookup table data changes.

Solution? - Enable "Re-cache from lookup source".

Static cache is created in RAM; the data present in the cache does not change at run time.

Dynamic cache is also created in RAM, but the data in the cache changes at run time. It is faster for SCD Type 1 mappings; we can use a dynamic cache for Type 1 but not for Type 2.

In short: the static cache lives in RAM, and the persistent cache is written to disk.
Difference between static and dynamic cache?
Ans: A static cache does not change while the session runs, whereas a dynamic cache is updated (rows inserted or updated) at run time as rows pass through the lookup.

Lookup Query Override
In the lookup properties, we can generate the default SQL ("Generate SQL") and then override it.
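A hedged sketch of a lookup SQL override (table and column names are assumptions), used to restrict the rows read into the lookup cache:

    SELECT CUST_ID, CUST_NAME
    FROM DIM_CUSTOMER
    WHERE ACTIVE_FLG = 'Y'

The override must still return all of the lookup ports defined in the transformation.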

Unconnected Lookup
It is not connected to the pipeline.

An unconnected lookup is called from an Expression transformation (using the :LKP syntax).

One port has to be selected as the return port.

An unconnected lookup has only one return port and returns one column for each row.

The major advantage of an unconnected lookup is its reusability. We can call an unconnected lookup multiple times in the mapping, unlike a connected lookup.

We can use the unconnected lookup transformation when we need to return the output from a single port.

An unconnected lookup does not participate in the data flow, so the Informatica server creates a separate cache for it and processing takes place in parallel; as a result, performance increases.
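A minimal sketch of calling an unconnected lookup from an Expression transformation output port; the lookup name lkp_get_dept_name and the input port DEPT_ID are assumptions:

    :LKP.lkp_get_dept_name(DEPT_ID)

The expression returns the value of whichever lookup port was designated as the return port (R).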
Pipeline Lookup

Variables & Parameters

Both are temporary memory used at run time:

1. Variable - the value held in memory can change during the run.

2. Parameter - the value held in memory cannot change during the run; it is fixed at the start.
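An illustrative parameter file entry; the folder, workflow, session, and parameter names are assumptions:

    [MyFolder.WF:wf_daily_load.ST:s_m_load_sales]
    $$LOAD_DATE=2024-01-01
    $DBConnection_SRC=Oracle_Src

By convention, $$ names are mapping parameters/variables and $ names are session parameters. A mapping variable can also be updated at run time with functions such as SETVARIABLE or SETMAXVARIABLE.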


What is meant by Target load plan?
Target load plan is used to specify the order in which the integration service loads the targets. You
can specify a target load order based on the source qualifier transformations in a mapping. If you
have multiple source qualifier transformations connected to multiple targets, you can specify the
order in which the integration service loads the data into the targets.

What is the difference between the STOP and ABORT options in the Workflow Monitor?

On issuing the STOP command on a session task, the Integration Service stops reading data from the source, although it continues processing the data already read and writing it to the targets. If the Integration Service cannot finish processing and committing data, we can issue the ABORT command.

The ABORT command has a timeout period of 60 seconds. If the Integration Service cannot finish processing data within the timeout period, it kills the DTM process and terminates the session.
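A hedged sketch of stopping or aborting a workflow from the command line with pmcmd; the service, domain, credentials, folder, and workflow names are assumptions:

    pmcmd stopworkflow -sv INT_SVC -d Domain_Dev -u admin -p secret -f DW_FOLDER wf_daily_load
    pmcmd abortworkflow -sv INT_SVC -d Domain_Dev -u admin -p secret -f DW_FOLDER wf_daily_load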

1. How do you load a comma-delimited source file to a ||-delimited target file?
Ans: We can do this while importing the target definition in the Target Designer by changing the delimiter option to ||, or else in the Workflow Manager → Mapping tab → Set File Properties → Advanced → Column Delimiter.

2. There are 2 tables from 2 different sources, EMP and DEPT, with a PK/FK relationship. In the DEPT table I have a column called Employee_Count which is not coming from the source table. I have to load this Employee_Count from the EMP table, but as per my client's request I am not supposed to join or look up the EMP table, and I still have to load Employee_Count in the DEPT table.
Can you develop this as one single mapping?
3. What are SetVariable and SetMaxVariable?
4. Have you worked on the SLA where if SLA is not met and paid the fine?
5. How can we optimize joiner Transformation?

• When joining between two data sources, treat the data source containing less
number of records as Master. This is because the Cache size of the Joiner
transformation depends on master data (unless sorted input with the same
source is used).
• Ensure that both the master and detail input sources are sorted and both
“Sorted Input” and “Master Sort Order” ports are checked and set

• Check if the Data and Index cache sizes can be configured.

6. How can we optimize the Aggregator?

Ensuring the input data is sorted (and enabling Sorted Input) is a must for better performance.

Other things to check to increase Aggregator performance:

• Check whether the "Case-Sensitive String Comparison" option is really required. Keeping this option checked (the default) slows down Aggregator performance.
• Ensure enough memory (RAM) is available for in-memory aggregation.
• Check whether the Aggregator cache is partitioned.

7. Their favourite question: the difference between unconnected and connected lookup.

8. Can we use SQL functions in a SQL override?


9. What is a persistent cache and how is it different from a static cache?
Persistent Cache
The cache data written to disk at the end of the session is called the persistent cache (permanent cache).
When a persistent cache is used, the static cache is built from the lookup source in the first run; from the second run onwards it is built from the persistent cache file.
The static cache is created in RAM, while the persistent cache is created on disk.
A persistent cache is used to increase performance, as the cache is rebuilt from the local disk file rather than by re-querying the lookup source.
The disadvantage of a persistent cache is that it is not updated when the lookup table data changes.
Solution? - Enable "Re-cache from lookup source".
10. How can we optimize a lookup?
11. Difference between Source Qualifier and Filter

Source Qualifier Transformation:
1. It filters rows while reading the data from the source.
2. It can filter rows only from relational sources.
3. It limits the row set extracted from the source.
4. It enhances performance by minimizing the number of rows brought into the mapping.
5. Its filter condition uses standard SQL, executed in the database.

Filter Transformation:
1. It filters rows from within the mapping's data flow.
2. It can filter rows coming from any type of source system.
3. It limits the row set sent onward to the target.
4. It is added close to the source to filter out unwanted data early and maximize performance.
5. It defines a condition using any expression or transformation function that returns TRUE or FALSE.
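A hedged sketch of what a Source Qualifier filter does: with a source filter such as STATUS = 'ACTIVE' (column name assumed), the query pushed to the database looks roughly like this:

    SELECT EMP.EMPNO, EMP.ENAME, EMP.SAL
    FROM EMP
    WHERE EMP.STATUS = 'ACTIVE'

The rows are filtered inside the database before they ever enter the mapping, whereas a Filter transformation discards them only after they have been read.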

12. Different types of tasks?


13. Can we validate n number of mappings at one time?
Ans: Yes
14. Difference between delete and truncate
15. How will you implement SCD type 2
16. Difference between Router and Filter?

Filter:
1. Single input and single output.
2. We cannot capture the non-satisfying data/result.

Router:
1. Single input and multiple outputs.
2. We can capture the non-satisfying data/result in the default group.

17. I have to run a workflow daily at 6:30 pm. How can we do that?
18. In SQL they asked about indexes and partitions.
19. They also asked me about mapplets and mappings.
20. Difference between partition and group by
21. Types of errors in Informatica

a) Threshold Errors (Non-Fatal)


• Reader errors.
Errors encountered by the Integration Service while reading the source
database or source files. Reader threshold errors can include alignment
errors while running a session in Unicode mode.
• Writer errors.
Errors encountered by the Integration Service while writing to the target
database or target files. Writer threshold errors can include key constraint
violations, loading nulls into a not null field, and database trigger responses.
• Transformation errors.
Errors encountered by the Integration Service while transforming data.
Transformation threshold errors can include conversion errors, and any
condition set up as an ERROR, such as null input.

b) Fatal Errors
A fatal error occurs when the Integration Service cannot access the source,
target, or repository. This can include loss of connection or target database
errors, such as lack of database space to load data. If the session uses a
Normalizer or Sequence Generator transformation, the Integration Service
cannot update the sequence values in the repository, and a fatal error
occurs.
If the session does not use a Normalizer or Sequence Generator
transformation, and the Integration Service loses connection to the
repository, the Integration Service does not stop the session. The session
completes, but the Integration Service cannot log session statistics into the
repository.
You can stop a session from the Workflow Manager or through pmcmd.
You can abort a session from the Workflow Manager. You can also use the
ABORT function in the mapping logic to abort a session when the Integration
Service encounters a designated transformation error.

Difference between Star and Snowflake Schema

Star Schema:
1. Contains the fact tables and the dimension tables.
2. It is a top-down model.
3. Normalization is not used.
4. It has fewer foreign keys.
5. It has high data redundancy.

Snowflake Schema:
1. Contains the fact tables, dimension tables, as well as sub-dimension tables.
2. It is a bottom-up model.
3. Both normalization and denormalization are used.
4. It has more foreign keys.
5. It has low data redundancy.

(Star schema and snowflake schema diagrams omitted.)
HCL- Sourab

Wipro

Birlasoft

HCL- Shivu 1 & 2

Capgemini Separate Interview

FIS Global

Capgemini Separate Interview

Deloitte

Tech Mahindra 1 & 2

Capgemini Separate Interview

Capgemini Separate Interview

MicroLand

HCL (Shivu) Interview Questions VJ


1. What are your day-to-day activities in your current role?
2. All Sources are Flat files only or both Flat files and DB?
3. What are the transformations that you have worked on so far in Informatica?
4. Have you worked on scd slowly changing dimensions?
5. Can you give examples for scd type 1 and type 2?
6. Can you explain the flow of scd type 2?
7. Do you have any experience with reporting tools or scheduling tools?
8. What are the schedulers?
9. Do you have any idea of Unix?
10. Have you done any improvements in your L3 support?
11. Have you used any indexes at the database level?
12. Have you ever implemented pushdown Optimisation techniques?
13. Have you worked on any session partitions?
14. If there is a workflow running on a daily basis, and a particular session that usually
completes in 10 minutes is today taking 5 hours and still running, what could be the issue
and how do you resolve it?
15. Have you faced any challenges in your L3 support?
16. What is the scheduling tool?
17. In SQL have you worked on functions, triggers, or procedures?
18. What is the difference between rank and dense rank?
19. In a Router transformation I have two conditions, one "greater than 6" and the other
"greater than 3", and the incoming value is 8. What will be the output for target 1 and target 2?
20. Do you have any idea about lookup caches?
21. What is the difference between static cache and dynamic cache?
22. What is a dynamic cache?
23. When do you use a static cache and when do you use a dynamic cache? Can you give
the scenarios?
HCL Interview Questions Raghav
• What kind of source files have you worked on till now?
• How are you handling to Informatica?
• Have you worked on indirect file system?
• How do you update the record in the target table without using update strategy and
lookup transformation?
• Have you worked on normalization?
• Do you have experience in UNIX scripting?
• Have you implemented an FTP process?
• Which command do you use to trigger an Informatica workflow from Unix? (pmcmd)
• Write a query to delete duplicate rows.
• Do you have any idea of scheduling tools?
• Do you have knowledge of support projects?
• Have you handled any performance issues or bottlenecks that came up, and worked on them?
• When loading a huge amount of data, how will you do performance tuning for a bottleneck?
What are the major performance-tuning steps?
• Where is Informatica mounted on Linux or Windows?

Birlasoft Interview Questions


1. Difference between B-tree and bitmap indexes
2. How to declare a clustered index on a table
3. What is an explain plan in SQL?
4. What is the difference between Static and Dynamic Cache in Informatica PowerCenter?
5. If we have a table containing multiple duplicate records, can we use a Dynamic lookup for
this table?
6. What is the use of the NewLookupRow port in a Dynamic lookup?
7. What is the difference between Connected and Unconnected lookup?
8. Can we return multiple values from an Unconnected lookup?
9. How to call an Unconnected lookup from an expression?
10. What are the various ways to call a stored procedure?
11. What are the different types of stored procedure?
12. What is target load plan in mapping and what is its use?
13. What is the difference between Mapping Variable and Mapping Parameters?
14. If you want to use a variable from one mapping in another mapping, how can we achieve
that?
15. What are the different tracing level in session log?
16. What is the difference between the Stop and Abort options?
17. What is the Control task and what is its use?
18. What is the Decision task?
19. What is the Assignment task?
20. What is the difference between Star Schema and Snowflake schema?
21. What are Facts and Dimensions?
22. What is a role-playing Dimension?
23. What are Junk Dimensions?
24. What is the difference between SCD Type 2 and SCD Type 3?
i-Link Interview Questions
1. What is a factless fact table?
2. Explain the complete project flow.
3. Which are the dimension and fact tables used in your current project?
4. What are the non-additive and fully additive columns in your current project?
5. Difference between connected and unconnected lookup
6. Types of joins in the Joiner transformation
7. How to do a cross join in Informatica
8. Different ways to remove duplicates in Informatica
9. Different ways to pass only 20 columns to the target out of a hundred columns in the source.
Which is the best way?
10. Passing different country data to different targets
11. Data cache and index cache: how do they work in joins?
12. What is performance tuning?
13. Types of SCD and the differences between them
14. Can we achieve an update strategy other than in the mapping?
15. How to remove duplicates and parse data into the target from multiple source files
16. Can we pass a data table with 2 columns, first name and last name, using a lookup
transformation?
17. Find the second highest salary (see the SQL sketch after this list)
18. Display only unique records or data
19. Types of joins
20. Difference between rank and dense rank
21. What will be the result if we use RANK in place of DENSE_RANK when removing duplicates?
22. Difference between views and materialized views
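A hedged SQL sketch for items 17 and 20, assuming an EMP table with columns ENAME and SAL:

    -- Second highest salary using DENSE_RANK:
    SELECT ename, sal
    FROM (SELECT ename, sal,
                 DENSE_RANK() OVER (ORDER BY sal DESC) AS rnk
          FROM emp) t
    WHERE rnk = 2;

RANK leaves gaps after ties (1, 1, 3, ...) while DENSE_RANK does not (1, 1, 2, ...); with RANK, a tie for the top salary would leave no row ranked 2, which is why DENSE_RANK is the safer choice here.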

Capgemini 1

1. When did you last use Informatica Power Centre?


2. What is your current role and can you please explain the day-to-day process?

Ans: I am working on a project called GTSC, where I play the role of an Informatica developer. My responsibility is to load data from the staging area to the respective dimension and fact tables, implementing the logic specified in the mapping sheet provided by our data modeller.

We have a daily scrum call where our scrum master assigns the tickets to us, and only after receiving a ticket do we get to know which particular table we will be working on. For the related documentation we look into SharePoint, where the documents are uploaded by the modellers; from there we learn the source table, the target table, the columns we need to map, and the logic we have to implement. As I said, I am responsible for loading the data into the dimensions and facts, and if I have any issues, such as the unavailability of the target table or anything else technical, I get in touch with the respective teams, such as the modellers or the business analysts, to understand more about the subject and the requirement, and then work towards fulfilling it.

We work based on report-specific requirements: our client says that for a particular report they need a certain set of source columns from a specific source table, based on which the specification is created, and on top of that we work on creating or editing the existing mappings to make sure the required columns are available in the respective dimensions and facts.

Besides these primary roles and responsibilities, we also work on other requests as and when required. These requests are also created by our scrum master; they might be something like writing a simple SQL query, designing a mapping, or doing production support.

3. What is the type of project you work on?


Ans:
4. You are getting the data from flat files, right? What are all the tables you have in your project?
Ans: We have many tables. For example, on the dimension side I can recall tables like d_cust,
d_item, d_loc, d_dt, etc., and on the fact side we have 2 types of facts: f_sale_header (periodic)
and f_sale_line (transaction).
5. Let's assume we have a customer details table and a product type table. For these 2 tables, how
many files do you receive?
6. Tell me about one project where you were involved in development and completed the project.
7. Can you help me understand the architecture of your completed project? Can you walk me
through the ER model of the project, all the tables you designed, and the different design
methodologies you used?
8. How were the dimension and fact tables developed, and how do these dimensions and facts
communicate, from both a design perspective and an architecture perspective?
9. Different types of dimensions?
10. What kind of data is stored in a fact table?
11. Let me ask you a simple design question. Assume you have 50 GB of data coming in daily, and
you need to build a system that loads this 50 GB into an Oracle database every day. Considering
load efficiency: if a record already exists in the database you need to skip it, if the record is new
you need to insert it, and if the record arrives updated with additional fields you need to update
the existing row. What design approach would you take? Make sure you consider the load
execution window.
12. What is the actual outcome of SCD Type 1?

13. Do you think that if you implement SCD Type 1 for 50 GB of data, it will work as expected?
14. Help me understand your design approach to implement this pipeline; explain the flow of the
mapping so that I can evaluate your design approach.
15. What are the different types of transformations that you have used?
16. What is the prerequisite for using the Update Strategy transformation?
17. Tell me how the Aggregator transformation works.
18. Difference between Filter and Router?
19. How do you find the second highest salary from an employee table using joiners?

Capgemini 2 Interview
1. I have a very simple mapping: source qualifier, expression, and target. I am trying to
read a table of 50 rows and insert it into a database table. After the workflow completes,
I see only one row in the target table; the remaining 49 rows are dropped. When I run the
debugger against the job, I see that all the rows coming up to the source qualifier are fine
and they also pass into the expression fine. From the expression transformation, only one
row comes out; the remaining 49 rows get dropped inside the expression transformation and
never even reach the target. What do you think is the problem here? What could be happening?

Ans: There is a problem with the data in the source file.

2. I have a file with a header of 10 columns. It has 5 data rows with values for all 10
columns, separated by commas. After those five rows, there are five more data rows with
values only for columns 1 to 5, again separated by commas, but after the fifth column there
is no comma and no nulls defined for the remaining five columns.
Q: How do you define the source in the Source Analyzer so that it can read all 10 rows at the
same time without dropping anything, using a single source definition, and then write them to
a target database table with 10 columns?

FIS Global
1. Can you tell me the roles and responsibilities of your current project?
2. Print the name "Jags" as J, a, g, s, i.e., one letter below another (one letter per row)?
3. How many occurrences of the letter "a" are in the string "Jagadish"?
4. What is the result of a Left Join, Right Join, and Full Join for the given tables?
5. How do you get the cumulative sum of salary in a given table? (See the SQL sketch after this list.)
6. The source is a comma-delimited flat file and the target is also a flat file. You have to load the
comma-delimited data to a target that is pipe-delimited. How do you load that?
7. I have two tables, an employee table and a department table; both share a foreign key /
primary key relationship. I have to develop one single mapping that loads both tables: the
employee table is loaded from the employee source and the department table from the
department source.
In the department table I have a column called count, which is not coming from my source
table, so I have to load this employee count from my employee table. But as per my client's
request, I am not supposed to use a lookup or a joiner between the employee table and the
department table. Without using a joiner or a lookup, is there a way I can still load the
employee count into the target table?
8. I have two tables, an employee table and a department table, and I have 2 mappings: one
mapping loads the employee table and the other loads the department table. How can I get
the employee count across two different mappings?
9. How many functions have you used while working on the Expression transformation?
10. What are the SETVARIABLE and SETMAXVARIABLE functions?
11. Did your project have any SLAs?
12. What are the different ways to remove duplicates in an Oracle database? (A SQL sketch
for questions 5 and 12 follows this list.)
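A hedged SQL sketch for questions 5 and 12 above, assuming an EMP table with columns EMPNO, ENAME, and SAL:

    -- Q5: cumulative (running) sum of salary:
    SELECT empno, ename, sal,
           SUM(sal) OVER (ORDER BY empno) AS cum_sal
    FROM emp;

    -- Q12: one way to remove duplicates in Oracle, keeping the first ROWID
    -- per duplicate key (here ENAME is assumed to define a duplicate):
    DELETE FROM emp
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM emp GROUP BY ename);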

Tech Mahindra Round 1


1. Have you heard about field matching?
2. What is the best approach if using the first name as a match field?
3. What is the mailability score?
4. The 100 doors puzzle: in what state are the doors after the last pass?
5. How do you load alternate records?
6. How do you parameterize reference tables, and which transformations support
parameterized reference tables?
7. How do you run a Developer-tool mapping from PowerCenter without exporting it to
PowerCenter?
8. Have you heard about the update strategy?
9. How do you update the records without using the update strategy?
10. Do you know about the data-driven in update strategy?
11. Have you heard about the snowflake platform?
12. Can you please explain the snowflake platform to me?
13. Can we convert a date to character in Oracle? If so, what is the syntax?
14. Have you heard about mapplet?
15. What is the difference between mapplet and reusable transformation
16. Please explain the difference between stop and abort?
17. Have you heard about Complex mapping
18. How can one identify if the mapping is correct or not without connecting to a session?
19. Explain the use of aggregator cache file
20. What is lookup transformation? explain in brief?
21. What is code page compatibility?
22. What is aggregator transformation?
23. What are dimensions and Fact tables?
24. What is a standalone command task?
25. What does a command task mean?
26. What is the workflow in Informatica?
27. What are the different tools of workflow manager?
28. What is a cube?
29. What is the size of the cube?
30. What is OLAP?
31. What is target load order or target load plan?
32. Where can we find the throughput option in Informatica? (In Workflow Monitor)
33. What do you mean by Pre SQL and post SQL?
34. What is the difference between connected lookup and unconnected lookup?
35. How Union transformation is used? (Used to combine data from different sources)
36. What is incremental aggregation?
37. How can you validate all the mappings in the repository simultaneously? (We cannot
validate simultaneously because we can validate one by one at a time)
38. Have you heard about role-playing dimensions?

Tech Mahindra Round 2


1. What is your exact role? What kind of project is it, and what is your role in that project?
Do you handle it individually or as a team?
2. Who handles the testing and production activity?
3. Could you please explain your project architecture?
4. What logic did you use while loading into the data marts?
5. Have you worked on designing the low-level design and high-level design documents?
6. How do you schedule the workflows and how do you handle the dependencies in
your application?
7. What are the Optimisation techniques that you have worked on in Informatica and in
the Oracle DB level? (Performance tuning and Optimisation techniques)
8. Where do we get to see all the percentage statistics while checking for bottlenecks?
I mean, where do you check all these details?
9. You identify that the bottleneck is at the target; what will you do to optimize it?
10. What are the things you would drop and recreate in order to optimize?
11. Why do you think the constraints will impact the loading?
12. What constraints do you work with on a daily basis?
13. What is the difference between primary key, unique key, foreign key, and composite
key?
14. How do you do the Optimisation at the database level while doing the performance
tuning? Ans: Partitions
15. Say you have defined a range partition and the value you receive is out of range; what
do you do in this case?
16. Do you know about explain plan and what are the things that you can do in explain
plan?
17. Have you worked on the MERGE statement in the DB? What can we do using a MERGE
statement? (See the sketch after this list.)
18. What is the difference between analytical and aggregate functions?
19. What is the difference between dense rank and rank?
20. Say you have defined an index on a table, but while running a query against the table
the index is not taking effect. What may be the cause? What are the reasons the index
might be suppressed?
21. Have you worked with the WITH clause? What is the advantage of using the WITH clause?
22. What is the difference between where and having clause?
23. What is the difference between nested subquery and correlated subquery?
24. What are the advanced Optimisation techniques in Informatica? Ans: (Session
Partitioning)
25. How does pushdown Optimisation work and what are the types?
26. What is the difference between source-side and target-side pushdown Optimisation?
27. What is target load order and constraint based loading in Informatica?
28. Can we implement scd Type 1 and type 2 together?
Ans: We can implement SCD1 and SCD2 in different pipelines with the same source in a single
mapping.
29. What level of history will be stored in SCD Type 1, type 2 and type 3?
30. What level of history will be stored in scd type 2?
31. How do you handle parameters in Informatica? How do you use them in your workflow,
and how can you generate parameters dynamically?
32. What is the difference between ROWID and ROWNUM?
33. How do you implement lead and lag functions in Informatica?
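A hedged sketch of an Oracle MERGE statement for question 17, with assumed table and column names; it upserts staging rows into a dimension in one statement:

    MERGE INTO dim_customer t
    USING stg_customer s
       ON (t.cust_id = s.cust_id)
    WHEN MATCHED THEN
         UPDATE SET t.cust_name = s.cust_name
    WHEN NOT MATCHED THEN
         INSERT (cust_id, cust_name)
         VALUES (s.cust_id, s.cust_name);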
