
Ab Initio Scenario Based Interview Questions and Answers
1. Mention what information a .dbc file provides to connect to the database?
Answer: The .dbc file provides the GDE with the information needed to connect to a database:

 Name and version number of the database to which you want to connect
 Name of the computer on which the database instance or server runs, or on which the database remote access software is installed
 Name of the server, database instance, or provider to which you want to link

2. What is a data processing cycle and what is its significance?

Answer: Data often needs to be processed continuously and used at the same time; this repeating sequence of input, processing, and output is known as the data processing cycle. The cycle may deliver results quickly or take extra time depending on the type, size, and nature of the data. This growing complexity creates a need for methods that are more reliable and advanced than existing approaches. The data processing cycle simply makes sure that complexity is kept to the minimum possible extent without much extra effort.

3. Suppose we assign you a new project. What would be your starting point and the key steps that you follow?
Answer: The first thing that matters is defining the objective of the task and then engaging the team in it. This provides a solid direction for accomplishing the task, which is especially important when one is working on a set of data that is completely new. The next thing that needs attention is effective data modeling, which includes finding missing values and validating the data. The last step is to track the results.

4. What do you mean by the term data warehousing? Is it different from Data Mining?
Answer: There is often a need for data retrieval, and warehousing ensures this without affecting the efficiency of operational systems. A data warehouse supports decision making and works alongside business applications such as Customer Relationship Management. Data mining is closely related to this approach: it extracts the required information from the warehouse.

5. Have you ever encountered an error called "depth not equal"?

Answer: When two components are linked together and their layouts do not match, this error can occur during compilation of the graph. The solution is to place a partitioning component between them wherever the layout changes.

6. What is a cursor? Within a cursor, how would you update fields on the row just fetched?
Answer: A cursor is the work area the Oracle engine uses for internal processing when executing a SQL statement. There are two types of cursors: implicit and explicit. An implicit cursor is used by Oracle for internal processing, while an explicit cursor is declared and opened by the user for the data required. To update fields on the row just fetched, declare the cursor with FOR UPDATE and issue UPDATE ... WHERE CURRENT OF <cursor_name>.

7. What are Cartesian joins?


Answer: A Cartesian join will get you a Cartesian product. A Cartesian join is when you join
every row of one table to every row of another table. You can also get one by joining every row
of a table to every row of itself.
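The every-row-to-every-row pairing can be illustrated with a small shell sketch (plain Unix tools and made-up file names, not actual database or Ab Initio code):

```shell
#!/bin/sh
# Two tiny "tables" (hypothetical sample data).
printf 'a1\na2\n' > /tmp/tableA.txt        # 2 rows
printf 'b1\nb2\nb3\n' > /tmp/tableB.txt    # 3 rows

# Cartesian join: pair every row of tableA with every row of tableB.
while read -r left; do
  while read -r right; do
    echo "$left,$right"
  done < /tmp/tableB.txt
done < /tmp/tableA.txt
```

The output has 2 x 3 = 6 rows, which is why an unintended Cartesian join blows up row counts so quickly.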

8. Can anyone give me an example of a real-time start script in a graph?

Answer: Here is a simple example of using a start script in a graph.

In the start script, define:

export DT=`date '+%m%d%y'`

The variable DT will then hold today's date before the graph runs. Somewhere in a graph transform, we can use this variable as:

out.process_dt :: $DT;

which picks up the value from the shell.
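A minimal runnable sketch of the same idea outside any graph (plain shell; process_dt is just the example field name from above):

```shell
#!/bin/sh
# Compute today's date in MMDDYY form, as the start script above would.
DT=`date '+%m%d%y'`
export DT

# A transform reading $DT would see a 6-digit date string.
echo "process_dt=$DT"
```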

9. What is skew and skew measurement?

Answer: Skew is a measure of how unevenly data flows to each partition.

Suppose the input comes from 4 partitions and the total size is 1 GB, split as 100 MB + 200 MB + 300 MB + 500 MB. The average per partition is 1000 MB / 4 = 250 MB. The skew of a partition is its deviation from the average divided by the size of the largest partition: for the 100 MB partition this is (100 - 250) / 500 = -0.3, and likewise for the 200 MB, 300 MB, and 500 MB partitions. A skew value close to zero, meaning the data is spread evenly, is desirable. Skew is an indirect measure of the performance of the graph.
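The arithmetic above can be reproduced with a short shell sketch (using awk for the division; the (size - average) / largest formula follows the worked example):

```shell
#!/bin/sh
# Partition sizes in MB from the example: 1000 MB total over 4 partitions.
sizes="100 200 300 500"
avg=250       # 1000 MB / 4 partitions
largest=500   # biggest partition

for s in $sizes; do
  # skew of a partition = (its size - average size) / largest partition size
  skew=$(awk -v s="$s" -v a="$avg" -v l="$largest" 'BEGIN { printf "%.2f", (s - a) / l }')
  echo "partition ${s}MB: skew=$skew"
done
```

This prints skews of -0.30, -0.10, 0.10, and 0.50; values near zero indicate an even spread.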

10. Do you think effective communication is necessary for data processing? What is your strength in terms of the same?
Answer: The biggest ability one can have in this domain is the ability to rely on the data or the information. Communication matters a lot in accomplishing several important tasks, such as presenting the information. There are many departments in an organization, and communication makes sure things stay good and reliable for everyone.

11. Describe in detail what a lookup is?

Answer: A keyed dataset used for in-memory matching is called a lookup. Lookup files can be classified into two types, static and dynamic. In the case of a dynamic lookup, the lookup file is generated in a previous phase and used in the current phase. A lookup can be used to map values against the data present in a particular serial or multifile.

12. What kind of layouts does Ab Initio support?

Answer:

• Ab Initio supports serial and parallel layouts.
• A graph can contain both serial and parallel layouts at the same time.
• A parallel layout depends on the degree of data parallelism.
• For example, if a multifile system is 4-way parallel, a component running in that layout runs 4 ways parallel.

13. What is a local lookup?

Answer:
• A local lookup file holds records that can be placed in main memory.
• Transform functions can then retrieve records much faster than they could from disk.

14. Mention what is the role of the Co>Operating System in Ab Initio?

Answer: The Ab Initio Co>Operating System provides features such as:
Managing and running Ab Initio graphs and controlling the ETL processes
Providing Ab Initio extensions to the operating system
Monitoring and debugging ETL processes
Metadata management and interaction with the EME

15. Mention what is Ab Initio?

Answer:
"Ab initio" is a Latin phrase meaning "from the beginning." Ab Initio is a tool used to extract, transform, and load data. It is also used for data analysis, data manipulation, batch processing, and graphical-user-interface-based parallel processing.

16. Mention what is the Rollup component?

Answer: The Rollup component enables users to group records on certain field values. It is a multi-stage transform function, consisting of an initialize stage, a rollup (computation) stage, and a finalize stage.

17. What is the importance of the EME in Ab Initio?

Answer: The EME is the repository in Ab Initio; it is used for check-in and checkout of graphs and also maintains graph versions.

18. What are the steps to create a repository in Ab Initio?

Answer: If you have installed Ab Initio on a standalone machine, there is no need to create a separate repository, as one is created automatically during the installation process. You can view the newly created repository under the Ab Initio folder.

19. What would be the next step after collecting the data?
Answer: Once the data is collected, the next important task is to enter it into the concerned machine or system. Gone are the days when storage depended on paper; in the present time data sizes are very large, and entry needs to be performed in a reliable manner. A digital approach is a good option, as it lets users perform this task easily and without compromising on anything. A large set of operations then needs to be performed for meaningful analysis. In many cases conversion also matters, and users are always free to choose the outcomes that best meet their expectations.
20. Suppose you find the term Validation mentioned with a set of data; what does that represent?
Answer: It represents that the concerned data is clean and correct, and can thus be used reliably without worrying about anything. Data validation is widely regarded as one of the key points in a processing system.

21. How is scientific data processing different from commercial data processing?
Answer: Scientific data processing means data with a great amount of computation, i.e. arithmetic operations: a limited amount of data is provided as input and a bulk of data is produced as the outcome. Commercial data processing is the opposite: the outcome is limited compared to the input data, and the computational operations are limited as well.

22. Name any two stages of the data processing cycle and provide your answer as a comparative study of them?
Answer: The first is collection and the second is preparation of data. Collection is the first stage and preparation the second in a cycle dealing with data processing. The first stage provides a baseline for the second, and the success and simplicity of the second depend on how accurately the first has been accomplished. Preparation is mainly the manipulation of important data: collection breaks data sets apart, while preparation joins them together.

23. What do you mean by a transaction file, and how is it different from a sort file?
Answer: A transaction file is generally considered to hold input data while a transaction is being processed, and the master files can simply be updated from it. A sort file, on the other hand, is produced by sorting, which assigns a fixed location to the data records.

24. Do you know what a local lookup is?

Answer: If your lookup file is a multifile partitioned/sorted on a particular key, then the local lookup function can be used instead of the plain lookup function call. It is local to a particular partition, depending on the key.
A lookup file consists of data records that can be held in main memory. This lets the transform function retrieve records much faster than retrieving them from disk, and allows the transform component to process the data records of multiple files quickly.

25. How many components were in your most complicated graph?

Answer: It depends on the type of components you use. In general, avoid using overly complicated transform functions in a graph.

26. Have you worked with packages?

Answer: Multistage transform components use packages by default. However, a user can create his own set of functions in a transform function and include them in other transform functions.

27. What are the different forms of output that can be obtained after processing of data?
Answer: These are:
1. Tables
2. Plain text files
3. Image files
4. Maps
5. Charts
6. Vectors
7. Raw files

Sometimes data is required in more than one format, so the software accomplishing this task must have the features needed to keep pace with this requirement.

28. What exactly do you know about typical data analysis?
Answer: It generally involves the collection and organization of important files. The main aim is to understand the exact relationship between the full (industrial) data and the portion that is analyzed. Some experts also call it one of the best available approaches to finding errors: it entails the ability to spot problems and enables the operator to find out the root causes of the errors.

29. How do you add default rules in the transformer?

Answer: Add Default Rules opens the Add Default Rules dialog. Select one of the following:
Match Names: generates a set of rules that copies input fields to output fields with the same name.
Use Wildcard (.*) Rule: generates a single wildcard rule that copies input fields to output fields with the same name.

1) If it is not already displayed, display the Transform Editor Grid.
2) Click the Business Rules tab if it is not already displayed.
3) Select Edit > Add Default Rules.

In the case of a Reformat, if the destination field names are the same as, or a subset of, the source fields, there is no need to write anything in the reformat xfr, unless you want to apply a real transform beyond reducing the set of fields or splitting the flow into a number of flows.

30. How do you truncate a table?

Answer: From Ab Initio, either run a Run SQL component with the DDL statement "truncate table <table_name>", or use the Truncate Table component.

31. Describe the GRANT/REVOKE DDL facility and how it is implemented?

Answer: This is basically part of the DBA's responsibilities. GRANT gives permissions, for example GRANT CREATE TABLE, GRANT CREATE VIEW, and many more.

REVOKE cancels granted permissions. Both the GRANT and REVOKE commands are issued by the DBA.
32. How would you find out whether a SQL query is using the indexes you expect?
Answer: The explain plan can be reviewed to check the execution plan of the query. This shows whether the expected indexes are used or not.

33. What is the purpose of having stored procedures in a database?

Answer: The main purpose of stored procedures is to reduce network traffic: the SQL statements execute inside the database server, so execution is much faster.

34. How do you convert a 4-way MFS to an 8-way MFS?

Answer: To convert a 4-way partition to an 8-way partition, we need to change the layout in the partitioning component. There are separate parameters for each type of partitioning, e.g. AI_MFS_HOME, AI_MFS_MEDIUM_HOME, AI_MFS_WIDE_HOME, etc.
The appropriate parameter needs to be selected in the component layout for the desired type of partitioning.

35. What is $mpjret? Where is it used in Ab Initio?

Answer: $mpjret holds the exit status of the graph. You can use it in an end script like:

if [ $mpjret -eq 0 ]
then
echo "success"
else
mailx -s "[graphname] failed" mailid
fi

36. What is the difference between a Scan component and a Rollup component?
Answer: Rollup is for group-by style aggregation, while Scan is for successive (running) totals. Basically, when we need one summary record per group we use Rollup; when we need a cumulative record for every input record we use Scan.
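The distinction can be sketched with awk as a Unix analogy (not actual Ab Initio components): the rollup-style pass emits one summary record per key, while the scan-style pass emits a running total for every input record.

```shell
#!/bin/sh
# Keyed amounts (hypothetical sample data).
printf 'a 10\na 20\nb 5\n' > /tmp/amounts.txt

echo "rollup-style (one record per group):"
awk '{ sum[$1] += $2 } END { for (k in sum) print k, sum[k] }' /tmp/amounts.txt | sort

echo "scan-style (running total per record):"
awk '{ run[$1] += $2; print $1, run[$1] }' /tmp/amounts.txt
```

The rollup-style pass prints "a 30" and "b 5"; the scan-style pass prints "a 10", "a 30", "b 5", one line per input record.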

37. What is the difference between a DML expression and an XFR expression?
Answer: The main difference between DML and XFR is that DML describes the record format of the metadata, while XFR contains the transform functions, which hold the business rules.

38. How can I run the 2 GUI merge files?

Answer: Do you mean merging GUI map files in WinRunner? Merging GUI map files in the GUI map editor does not create a corresponding test script, and without a test script you cannot run a file. So it is not possible to run a file by merging 2 GUI map files.

39. What is the difference between rollup and scan?

Answer: Rollup cannot generate cumulative summary records; for that we use scan.

40. What is common among data validity and data integrity?

Answer: Both approaches deal with errors in the data and make sure of the smooth flow of operations, which matters greatly.
41. Name the different types of processing that you know about?
Answer: They are:

1. Real-time processing
2. Multiprocessing
3. Time-sharing
4. Batch processing
5. Adequate processing

42. What is the difference between a lookup file and a lookup, with a relevant example?
Answer: Generally, a lookup file represents one or more serial files (flat files) whose amount of data is small enough to be held in memory. This allows transform functions to retrieve records much more quickly than they could from disk.

43. How do you run a graph infinitely?

Answer: To run a graph infinitely, the graph's .ksh file should be called by the end script of the graph. If the graph name is abc.mp, then the graph's end script should call the abc.ksh file.
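The self-restarting pattern can be sketched in plain shell. This demo caps itself at 3 runs so it terminates; a real end script would call the graph's .ksh again unconditionally (run_graph and the cap are hypothetical stand-ins for the demo):

```shell
#!/bin/sh
# Stand-in for running the graph (a real script would invoke abc.mp here).
run_graph() {
  echo "graph run $1"
}

i=${1:-1}         # current iteration, defaults to 1
run_graph "$i"

# "End script": re-invoke this same script. Capped at 3 here so the demo ends.
if [ "$i" -lt 3 ]; then
  exec /bin/sh "$0" $((i + 1))
fi
```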

44. Mention how you can connect the EME to the Ab Initio Server?

Answer: There are several ways to connect to the EME, such as:

 Set AB_AIR_ROOT
 Log in to the EME web interface: http://serverhost:[serverport]/abinitio
 Through the GDE, you can connect to the EME data store
 Through the air command

45. How can you force the optimizer to use a particular index?
Answer:
Use hints (/*+ ... */); these act as directives to the optimizer, for example /*+ INDEX(table index_name) */.

46. What are the operations that support avoiding duplicate records?
Answer: Duplicate records can be avoided by using the following:

 Using Dedup Sort
 Performing aggregation
 Utilizing the Rollup component
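As a command-line analogy for Dedup Sort (plain Unix sort, not the Ab Initio component): sort on the key, then keep one record per key.

```shell
#!/bin/sh
# Sample keyed records with duplicates (hypothetical data).
printf 'id2,beta\nid1,alpha\nid2,beta\nid1,alpha\n' > /tmp/records.csv

# -t, sets the field separator, -k1,1 sorts on the key field only,
# and -u keeps a single record per sort key.
sort -t, -k1,1 -u /tmp/records.csv
```

This leaves just id1,alpha and id2,beta, one record per key.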

47. What is m_dump?

Answer: The m_dump command prints the data of a file in a formatted way, given its DML record format; it is typically invoked as m_dump <metadata.dml> <datafile>.

48. What is the latest version that is available in Ab Initio?

Answer:
At the time this was written, the latest version of the GDE was 1.15 and of the Co>Operating System was 2.14.
49. What are the differences between different versions of Co>Op?
Answer: 1.10 is an early version and the rest are major versions; a lot of components were added and revised in the later versions.

50. Explain Ab Initio's dependency analysis?

Answer: Dependency analysis in Ab Initio is closely associated with data lineage. Data lineage provides the source of data, and by running dependency analysis, the applications that depend on that data can be identified. Dependency analysis also helps carry out maximum retrieval from existing data through the use of a surrogate key; new key values can be generated using the scan component or next_in_sequence() in a reformat.

51. Informatica vs Ab Initio?

Answer:
Feature | Ab Initio | Informatica
About the tool | Code-based ETL | Engine-based ETL
Parallelism | Supports three types of parallelism | Supports one type of parallelism
Scheduler | No built-in scheduler; scheduling through scripts | Scheduler available
Error handling | Can attach separate error and reject files | One file for all
Robustness | Robust, with function-by-function comparison | Basic in terms of robustness
Feedback | Provides performance metrics for each component executed | Debug mode, but slow implementation
Delimiters while reading | Supports multiple delimiters | Only a dedicated delimiter

52. What are the benefits of data analysis?

Answer: It makes sure of the following:
1. Developments related to the core tasks can be explained
2. Hypotheses can be tested with an integrated approach
3. Patterns can be detected in a reliable manner

53. What are the key elements of a data processing system?

Answer: These are the Converter, Aggregator, Validator, Analyzer, Summarizer, and Sorter.

54. What are the factors that can compromise data integrity?

Answer: There are several kinds of errors that can cause this issue and lead to many other problems. These are:

1. Bugs and malware
2. Human error
3. Hardware error
4. Transfer errors, which generally include data compression beyond a limit

55. What does EDP stand for?

Answer: It stands for Electronic Data Processing.
56. Give one reason when you need to consider multiple data processing?
Answer: When the files obtained are not the complete outcome required and need further processing.

57. Can sorting and storing be done through a single piece of software, or do you need different ones for these approaches?
Answer: It actually depends on the type and nature of the data. Although it is possible to accomplish both tasks with the same software, many programs have their own specialization, and it is good to adopt such an approach to get quality outcomes. There are also some pre-defined sets of modules and operations that matter; if the conditions they impose are met, users can perform multiple tasks with the same software. Output files can be produced in various formats.
