Qdoc - Tips Datastage-Material
DATA STAGE
2/12/2011
DataStage is an ETL tool. It is a client-server technology and an integrated toolset used for designing, running, monitoring and administering the "data acquisition" application, which is known as a "job".
When DataStage is installed on a personal PC, four client components automatically appear, in blue: DATASTAGE ADMINISTRATOR, DATASTAGE DESIGNER, DATASTAGE DIRECTOR and DATASTAGE MANAGER. These are the client components.
DS Client components:-
DataStage Administrator: This component is used to create or delete projects, clean up metadata stored in the repository and install NLS.
DataStage Director: It is used to validate, schedule, run and monitor DataStage jobs.
DataStage Designer: It is used to create the DataStage application known as a job.
Repository: It is one of the server-side components; it stores the information needed to build the Data Warehouse.
DataStage Server: It executes the jobs that we create in DataStage.
Parallel jobs:-
Pipeline parallelism:- The data flows continuously through the pipeline; all stages in the job operate simultaneously. For example, if the source has 4 records, as soon as the first record starts processing in a downstream stage, the remaining records are being processed by the earlier stages at the same time.
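As a rough illustration (plain Python, not DataStage itself), pipeline parallelism behaves like chained generators: each stage starts consuming records as soon as the previous stage emits them, instead of waiting for the whole dataset.

    # Illustrative sketch only: Python generators mimic a DataStage pipeline.
    def read_source():
        for rec in [1, 2, 3, 4]:       # 4 source records
            print("read", rec)
            yield rec

    def transform(records):
        for rec in records:
            print("transform", rec)
            yield rec * 10             # transformed as soon as it arrives

    def load(records):
        for rec in records:
            print("load", rec)

    load(transform(read_source()))     # record 1 is loaded before record 4 is read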
Partition parallelism:- In this parallelism, the same job effectively runs simultaneously on several processors; each processor handles a separate subset of the total records. For example, if the source has 100 records and 4 partitions, the data is partitioned equally across the 4 partitions, so each partition gets 25 records. All four partitions then process their records simultaneously, in parallel.
Round Robin:- The first record goes to the first processing node, the second record goes to the second processing node, and so on. This method is useful for creating equal-sized partitions.
Hash:- Records with the same value for the hash-key field are sent to the same processing node.
Modulus:- This partitioning is based on the key column value modulo the number of partitions. It is similar to hash partitioning.
Random:- Records are randomly distributed across all processing nodes.
Range:- Related records are distributed to the same node. The range is specified based on a key column.
Auto:- This is the most common method. DataStage determines the best partitioning method to use depending on the type of stage.
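The partitioning methods above can be sketched in plain Python (an illustration of the idea, not DataStage's actual implementation; record fields are made up): round robin deals records out in turn, while hash and modulus send equal key values to the same partition.

    # Illustrative sketch: distributing 100 records across 4 partitions.
    NUM_PARTITIONS = 4
    records = [{"id": i, "dept": i % 7} for i in range(100)]

    # Round robin: record 0 -> partition 0, record 1 -> partition 1, ...
    rr_parts = [[] for _ in range(NUM_PARTITIONS)]
    for pos, rec in enumerate(records):
        rr_parts[pos % NUM_PARTITIONS].append(rec)

    # Hash: the same key value always lands on the same partition.
    hash_parts = [[] for _ in range(NUM_PARTITIONS)]
    for rec in records:
        hash_parts[hash(rec["dept"]) % NUM_PARTITIONS].append(rec)

    # Modulus: like hash, but the numeric key value modulo the partition count.
    mod_parts = [[] for _ in range(NUM_PARTITIONS)]
    for rec in records:
        mod_parts[rec["id"] % NUM_PARTITIONS].append(rec)

    print([len(p) for p in rr_parts])   # [25, 25, 25, 25] -> equal-sized partitions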
File stages
Note:- All file stages are passive stages, which means they are defined just for read or write access.
Sequential File:-
It is one of the file stages, used to read data from a file or write data to a file. It can support a single input link or a single output link, as well as a reject link.
Dataset:-
It is also one of the file stages, used to store data in an internal format at the operating-system level, so it takes less time to read or write the data.
File Set:-
It is also one of the file stages, used to read or write data in a file set. The file is saved with the extension ".fs". It operates in parallel.
Dataset is the internal format of DataStage. The main points to consider about a dataset before using it are:
1) It stores data in binary in the internal format of DataStage, so it takes less time to read/write from a dataset than from any other source/target.
2) It preserves the partitioning scheme, so you don't have to partition the data again.
3) You cannot view the data without DataStage.
When you use a sequential file as a source, at the time of compilation it is converted from ASCII to the native format, whereas when you use datasets no conversion is required. Also, by default sequential files are processed in sequence only. Sequential files can accommodate up to 2 GB only, and sequential files do not support NULL values. All of the above can be overcome using the Dataset stage, but the selection depends on the requirement: if you want to capture rejected data, you need to use a Sequential File or File Set stage.
The Sequential File stage is used to extract data from flat files and load data into flat files, and its limit is 2 GB. A dataset is an intermediate stage; it has parallelism when loading data into the dataset, which improves performance.
Complex Flat File (CFF) stage:- This stage is used to read data from mainframe files. By using CFF we can read ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code) data. We can select the required columns and omit the remaining ones. We can collect the rejects (badly formatted records) by setting the Reject property to Save (other options: Continue, Fail). We can also flatten arrays (COBOL files).
Processing Stages
Aggregator stage:-
It is one of the processing stages, used to perform summaries for groups of input data. It supports a single input link, which carries the input data, and a single output link, which carries the aggregated data.
To see the Aggregator stage properties, double-click on the stage.
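As a rough Python sketch of what the Aggregator stage does (the grouping column and aggregate function here are assumptions for the example, not from the original):

    # Illustrative sketch: group input rows by a key and compute a summary.
    from collections import defaultdict

    rows = [
        {"dept": "HR", "salary": 1000},
        {"dept": "HR", "salary": 2000},
        {"dept": "IT", "salary": 3000},
    ]

    totals = defaultdict(int)
    for row in rows:                  # single input link carries the detail rows
        totals[row["dept"]] += row["salary"]

    # Single output link carries one aggregated row per group.
    for dept, total in totals.items():
        print({"dept": dept, "sum_salary": total})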
Copy stage:-
It is also one of the processing stages, used simply to copy the input data to a number of output links. It supports a single input link and any number of output links.
To see the Copy stage properties, double-click on the stage.
Filter Stage:-
It is also one of the processing stages, used to filter the data based on a given condition. It supports a single input link and 'n' number of output links, and optionally one reject link.
To see the Filter stage properties, double-click on the stage.
Switch stage:- It is also one of the processing stages, used to filter the input data based on given conditions. It supports a single input link and up to 128 output links.
To see the Switch stage properties, double-click on the stage.
Both stages have the same functionality and responsibilities, but they differ in the way they execute (see the sketch below).
In the Filter stage, we can give multiple conditions on multiple columns; every condition is applied to the data coming from the source system, and the filtered data is loaded into the target.
In the Switch stage, we give multiple conditions on a single column; the data comes from the source only once, is checked against all the conditions in the Switch stage, and is loaded into the target.
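A minimal Python sketch of the difference (column names and conditions are made up for illustration): in the filter-style version every condition is tested, so one record may be written to several outputs; in the switch-style version each record goes to exactly one output based on a single column.

    # Illustrative sketch of Filter vs Switch semantics.
    rows = [{"grade": "A", "salary": 5000}, {"grade": "B", "salary": 800}]

    # Filter: multiple independent conditions, possibly on different columns;
    # a record can satisfy more than one and be sent to more than one link.
    filter_links = {"rich": [], "top_grade": []}
    for row in rows:
        if row["salary"] > 1000:
            filter_links["rich"].append(row)
        if row["grade"] == "A":
            filter_links["top_grade"].append(row)

    # Switch: conditions on a single column; each record goes to exactly one link.
    switch_links = {"A": [], "B": []}
    for row in rows:
        switch_links[row["grade"]].append(row)   # one matching case per record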
Join stage:-
It is also one of the processing stages, used to combine two or more input datasets based on a key field. It supports two or more input links and one output link, and doesn't support a reject link.
The Join stage can perform inner, left-outer, right-outer and full-outer joins.
Inner join displays the matched records from both side tables.
Left-outer join shows the matched records from both sides as well as the unmatched records from the left-side table.
Right-outer join shows the matched records from both sides as well as the unmatched records from the right-side table.
Full-outer join shows the matched as well as the unmatched records from both sides.
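As a rough Python sketch of the four join types (keys and values are invented for illustration):

    # Illustrative sketch: joins on a key column.
    left  = {1: "Ram", 2: "Sita"}          # key -> name
    right = {2: "HR",  3: "IT"}            # key -> dept

    inner       = {k: (left[k], right[k]) for k in left.keys() & right.keys()}
    left_outer  = {k: (left[k], right.get(k)) for k in left}    # keep all left rows
    right_outer = {k: (left.get(k), right[k]) for k in right}   # keep all right rows
    full_outer  = {k: (left.get(k), right.get(k))
                   for k in left.keys() | right.keys()}         # keep everything

    print(inner)       # {2: ('Sita', 'HR')}  -> only matched keys
    print(left_outer)  # {1: ('Ram', None), 2: ('Sita', 'HR')}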
To see the Join stage properties, double-click on the stage.
Merge stage:-
It is also one of the processing stages, used to merge multiple input datasets. It supports multiple input links; the first input link is called the "master input link" and the remaining links are called "update links".
To see the Merge stage properties, double-click on the stage.
Q) In which case does the Merge stage perform an inner join, and in which case a left-outer join?
The Merge stage has a property for this, Unmatched Masters Mode (see the picture above). If Unmatched Masters Mode = Drop, it performs an inner join; if Unmatched Masters Mode = Keep, it performs a left-outer join.
Look-up stage:-
This is also one of the processing stages, used to perform lookups against relational tables (or reference datasets). It supports multiple input links, a single output link and a single reject link.
To see the Look-up stage properties, double-click on the stage.
Observe in the picture above that the stage has a Constraints icon. Double-click on that icon to open the constraints window.
Lookup Failure:
By default, Lookup Failure is set to Fail. If the condition-not-met option is set to Drop, the lookup behaves like an inner join; if it is set to Continue, it behaves like a left-outer join. The Reject option requires a reject link, down which the records that fail the lookup are sent.
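A small Python sketch of these Lookup Failure settings (reference data and column names are invented for the example):

    # Illustrative sketch: lookup failure handling.
    reference = {10: "Electronics", 20: "Clothing"}   # lookup table: key -> category
    rows = [{"prod": 10}, {"prod": 99}]               # 99 has no match

    output, rejects = [], []
    MODE = "continue"                                 # or "drop", "reject", "fail"

    for row in rows:
        match = reference.get(row["prod"])
        if match is None:
            if MODE == "drop":
                continue                              # inner-join behaviour
            if MODE == "reject":
                rejects.append(row)                   # goes down the reject link
                continue
            if MODE == "fail":
                raise RuntimeError("lookup failed")   # job aborts
        output.append({**row, "category": match})     # continue -> left outer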
The differences between Join, Merge and Lookup fall under three headings:
1. Input requirements
2. Treatment of unmatched records
3. Memory usage
1. Input requirements:-
Join supports two or more input links and a single output link, and doesn't support a reject link. Merge supports multiple input links and multiple output links, and also supports reject links, one per update link. Lookup supports multiple input links and a single output link, as well as one reject link.
Join supports 4 types of joins: inner, left-outer, right-outer and full-outer. Merge supports inner as well as left-outer only. Lookup also supports inner as well as left-outer.
2. Treatment of unmatched records:-
Join doesn't produce any unmatched records, because it doesn't support a reject link. Merge doesn't catch unmatched master records on the master link; each unmatched update record goes to the corresponding update reject link.
3. Memory usage:-
If the reference dataset is larger than physical memory, the Join stage is the better choice for performance. If the reference dataset is smaller than physical memory, Lookup is recommended, since it holds the reference data in memory.
Funnel stage:- It is also one of the active processing stages, used to combine multiple input datasets into a single output dataset.
Remove Duplicates stage:- It is also one of the processing stages, used to remove duplicate data based on a key field.
Sort stage:-
It is also one of the processing stages, used to sort data based on a key field, in either ascending or descending order.
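A rough Python sketch of the common pattern of sorting on a key and then removing duplicates on the same key (the column names are illustrative):

    # Illustrative sketch: sort by key, then keep the first record per key.
    rows = [{"id": 2, "v": "b"}, {"id": 1, "v": "a"}, {"id": 2, "v": "c"}]

    rows.sort(key=lambda r: r["id"])   # Sort stage: ascending on the key field

    seen, unique = set(), []
    for row in rows:                   # Remove Duplicates stage: key-based
        if row["id"] not in seen:
            seen.add(row["id"])
            unique.append(row)

    print(unique)   # [{'id': 1, 'v': 'a'}, {'id': 2, 'v': 'b'}]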
Modify Stage:-
It is also one of the processing stages, used when you need to handle null handling and data type changes. It is used to change data types: if the source contains varchar and the target contains integer, we use the Modify stage and change the type according to the requirement. We can also make some modifications to column lengths.
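In plain Python terms, the Modify stage's job looks roughly like this (the column name and the default substituted for nulls are assumptions for the example):

    # Illustrative sketch: type conversion with null handling.
    rows = [{"emp_id": "101"}, {"emp_id": None}]

    for row in rows:
        # varchar -> integer, substituting a default when the value is NULL
        row["emp_id"] = int(row["emp_id"]) if row["emp_id"] is not None else -1

    print(rows)   # [{'emp_id': 101}, {'emp_id': -1}]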
Pivot stage:-
Many people have the following misconceptions about the Pivot stage:
1) It converts rows into columns
2) By using a Pivot stage we can convert 10 rows into 100 columns and 100 columns into 10 rows
3) You can add more points here!!
Let me first tell you that a Pivot stage only CONVERTS COLUMNS INTO ROWS and nothing else. Some DS professionals refer to this as NORMALIZATION. Another fact about the Pivot stage is that it's irreplaceable, i.e. no other stage has this functionality of converting columns into rows!!! So that makes it unique, doesn't it!!
Let's cover how exactly it does it....
For example, let's take a file with the following fields: sno, sname, m1, m2, m3.
Basically you would use a Pivot stage when you need to convert those 3 fields m1, m2, m3 into a single field 'marks' which contains one value per row, i.e. you would need the following output.
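In Python terms, the column-to-row conversion looks roughly like this (using the sno, sname, m1, m2, m3 example above; the values are made up):

    # Illustrative sketch: Pivot = columns m1, m2, m3 become one 'marks' column.
    row = {"sno": 1, "sname": "Ravi", "m1": 70, "m2": 80, "m3": 90}

    pivoted = [
        {"sno": row["sno"], "sname": row["sname"], "marks": row[col]}
        for col in ("m1", "m2", "m3")
    ]

    for out in pivoted:
        print(out)
    # {'sno': 1, 'sname': 'Ravi', 'marks': 70}
    # {'sno': 1, 'sname': 'Ravi', 'marks': 80}
    # {'sno': 1, 'sname': 'Ravi', 'marks': 90}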
Surrogate Key stage:-
It is also an important processing stage, used to generate sequence numbers while implementing slowly changing dimensions. A surrogate key is a system-generated key on dimension tables.
A surrogate key is an artificial identifier for an entity; surrogate keys are generated by the system sequentially. A primary key is a natural identifier for an entity; primary key values are entered manually and uniquely identify each record, so there is no repetition of data.
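A minimal sketch of surrogate key generation (a simple in-memory counter for illustration; as noted later, DataStage itself uses a state file to remember the maximum value):

    # Illustrative sketch: assign system-generated surrogate keys.
    import itertools

    surrogate = itertools.count(start=1)          # sequential, system generated
    dimension_rows = [{"cust_code": "C01"}, {"cust_code": "C02"}]

    for row in dimension_rows:
        row["cust_sk"] = next(surrogate)          # artificial identifier

    print(dimension_rows)
    # [{'cust_code': 'C01', 'cust_sk': 1}, {'cust_code': 'C02', 'cust_sk': 2}]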
Transformer Stage:-
It is an active processing stage which allows filtering the data based on a given condition and can derive new data definitions by developing expressions. This stage uses the Microsoft .NET framework environment for its compilation.
The Transformer stage can perform data cleaning and data scrubbing operations. It can have a single input link, a number of output links and also a reject link.
Stage Variable - An intermediate processing variable that retains its value during the read and doesn't pass the value into a target column.
Constraints - Conditions that are either true or false and that specify the flow of data along a link.
When you click Stage Properties on the Transformer stage, it shows a window with:
1) Stage Variables
2) Derivations
3) Constraints
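A rough Python sketch of how a stage variable, a derivation and a constraint work together inside the Transformer (column names and the condition are invented for illustration):

    # Illustrative sketch: Transformer = stage variables + derivations + constraints.
    rows = [{"fname": "ram", "lname": "rao", "salary": 900},
            {"fname": "sita", "lname": "devi", "salary": 5000}]

    output = []
    for row in rows:
        # Stage variable: intermediate value, never written to the target itself.
        sv_full_name = (row["fname"] + " " + row["lname"]).title()

        # Constraint: only rows where the condition is true flow down this link.
        if row["salary"] > 1000:
            # Derivation: expression that builds the output column.
            output.append({"full_name": sv_full_name, "salary": row["salary"]})

    print(output)   # only the second row passes the constraint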
Change Capture stage:-
This is also one of the active processing stages, used to capture the changes between two sources, the before and after datasets. The source which is used as the reference to capture the change is called the after dataset; the source which we are checking for changes is called the before dataset. A change code is added to the output dataset, and from this change code we recognize each record as a delete, insert or update.
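As a minimal Python sketch of the idea (assuming the standard Change Capture change codes 0 = copy, 1 = insert, 2 = delete, 3 = edit; keys and values are made up):

    # Illustrative sketch: compare before/after datasets and emit change codes.
    before = {1: "A", 2: "B", 3: "C"}     # key -> value (before dataset)
    after  = {1: "A", 2: "X", 4: "D"}     # key -> value (after dataset)

    changes = []
    for key, value in after.items():
        if key not in before:
            changes.append((key, value, 1))        # change_code 1 = insert
        elif before[key] != value:
            changes.append((key, value, 3))        # change_code 3 = edit/update
    for key in before.keys() - after.keys():
        changes.append((key, before[key], 2))      # change_code 2 = delete

    print(changes)   # [(2, 'X', 3), (4, 'D', 1), (3, 'C', 2)]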
Row Generator stage:-
It produces a set of data fitting the specified metadata. It is useful when you want to test your job but have no real data available to process. It has no input links and a single output link.
Column Generator stage:-
This stage adds columns to the incoming data and generates mock data for these columns for each data row processed. It can have a single input link and a single output link.
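A minimal Python sketch of the idea (the generated column name and its values are assumptions):

    # Illustrative sketch: Column Generator adds a column with mock data per row.
    import random

    rows = [{"id": 1}, {"id": 2}]
    for row in rows:
        row["test_flag"] = random.choice(["Y", "N"])   # generated mock column

    print(rows)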
Head Stage:-
This stage is helpful for testing and debugging applications with large datasets. It selects the TOP N rows from the input dataset and copies the selected rows to an output dataset. It can have a single input link and a single output link.
Tail Stage:-
This stage is helpful for testing and debugging applications with large datasets. It selects the BOTTOM N rows from the input dataset and copies the selected rows to an output dataset. It can have a single input link and a single output link.
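Both stages amount to simple row slicing; roughly, in Python:

    # Illustrative sketch: Head = first N rows, Tail = last N rows.
    rows = list(range(1, 101))   # 100 input rows
    N = 10

    head_output = rows[:N]       # Head stage: TOP N rows
    tail_output = rows[-N:]      # Tail stage: BOTTOM N rows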
Sample Stage:-
This stage has a single input link; it can have any number of output links when operating in percent mode, and a single output link in period mode.
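A rough Python sketch of the two modes (percentages and the period value are made up for illustration):

    # Illustrative sketch: Sample stage modes.
    import random

    random.seed(7)                       # reproducible illustration
    rows = list(range(100))

    # Percent mode: a given fraction of rows is sampled for an output link.
    link_10pct = [r for r in rows if random.random() < 0.10]   # ~10 percent

    # Period mode: every Nth row goes to the single output link.
    period_output = rows[::5]            # period N = 5 -> rows 0, 5, 10, ...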
Peek Stage:-
It can have a single input link and any number of output links. It can be used to print record column values to the job log.
SCD Type-2 is one of the common problems in DWH: it maintains the history information for a particular organization in the target. So, for every update in the source, it inserts a new record in the target.
In this implementation there are two input datasets, the before and after datasets. These two are connected to a Change Capture stage, which is connected to a Transformer stage that has two output links, an insert link and an update link. The insert link is connected to a stored-procedure stage, which is connected to a Transformer, which is connected to the target stage. The other output link (the update link) of the Transformer stage is joined with the target stage, after removing duplicate records using a Remove Duplicates stage. The output link of the Join stage is connected to a Transformer stage, which is connected to the target update stage.
For example, suppose the source EMP table has 100 records. When the job runs for the first time, the records are initially loaded into the target insert stage. How? First the two input datasets are compared; the first time, the records are all new, so the Change Capture stage gives change code = 1. The Transformer stage moves the records from source to target, generating a sequence for the records using the stored-procedure stage.
If any update occurs at the source, the updated records are stored on the target side (TGT_UPDATE). How? First the two input datasets are compared; since changes have occurred at the source level, the Change Capture stage gives change code = 3. Using this change code, the Transformer stage sends the records through the update link to the Join stage. The Join stage joins the updated records with the target records, after removing duplicate records using the Remove Duplicates stage. The output of the Join stage is connected to a Transformer stage, which moves the update records to the target update stage.
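The job design above can be reduced to the following Python sketch (a minimal illustration, assuming the change codes 1 = insert and 3 = update used earlier; column names and values are made up, and the real job uses the stages described above rather than in-memory lists):

    # Illustrative sketch: SCD Type-2 keeps history by expiring old rows
    # and inserting a new version for every change.
    import itertools

    next_sk = itertools.count(start=2)                  # surrogate key generator
    target = [{"sk": 1, "emp": 100, "name": "Ram", "current": True}]
    source = [{"emp": 100, "name": "Raam"}, {"emp": 101, "name": "Sita"}]

    for rec in source:
        current = next((t for t in target
                        if t["emp"] == rec["emp"] and t["current"]), None)
        if current is None:                             # change code 1: insert
            target.append({"sk": next(next_sk), "current": True, **rec})
        elif current["name"] != rec["name"]:            # change code 3: update
            current["current"] = False                  # expire the old version
            target.append({"sk": next(next_sk), "current": True, **rec})

    for row in target:
        print(row)   # both the old Ram row and the new Raam row survive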
Basically, an environment variable is a predefined variable that we can use while creating a DS job. We create/declare these variables in DS Administrator, and while designing the job we set the properties for these variables. Environment variables are also called global variables.
There are two kinds of variables:
1. Local variables
2. Environment variables / global variables
Let me give you an example of an environment variable, so that it becomes clearer.
To connect to a database you need a user id, password and schema. These are constant throughout the project, so they will be created as environment variables.
By using this approach, if there is any change in the password or schema there is no need to worry about all the jobs: change it at the level of the environment variable and that will take care of all the jobs.
There is an icon in the toolbar to go to Job Parameters, or you can press Ctrl+J to enter the Job Parameters dialog box. Once there, give a parameter name and a corresponding default value. This lets you enter the value when you run the job; it is not always necessary to open the job to change a parameter value. Also, when the job runs through a script, it is enough to give the parameter value on the command line of the script; otherwise you would have to change the value in the job, compile, and then run it from the script. So parameters make it easy for users to handle jobs.
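For example, the dsjob command line can pass parameter values at run time (the project, job and parameter names below are hypothetical, invented for illustration):

    # Hypothetical project/job names; -param overrides the job's default values.
    dsjob -run -param DB_USER=etl_user -param DB_SCHEMA=sales MyProject MyJob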
20) What is the difference between DataStage versions 7.5 and 8.0.1?
1. In DataStage 7.5.2 we have Manager as a client; in 8.0.1 we don't have a separate Manager client. The Manager client is embedded in the Designer client.
2. In 7.5.2 QualityStage has a separate designer; in 8.0.1 QualityStage is integrated into the Designer.
3. In 7.5.2 code and metadata are stored in a file-based system; in 8.0.1 code is file based whereas metadata is stored in a database.
6. In 7.5.2 a single Join stage can't support multiple references; in 8.0.1 a single Join stage can support multiple references.
7. In 7.5.2, when a developer opens a particular job and another developer wants to open the same job, that job can't be opened; in 8.0.1, when a developer opens a particular job and another developer wants to open the same job, it can be opened as a read-only job.
9. In 8.0.1 the quick find and advanced find features are available; in 7.5.2 they are not.
10. In 7.5.2, the first time a job is run, the surrogate key is generated from the initial value to n. The next time the same job is compiled and run, the surrogate key is again generated from the initial value to n; automatic increment of the surrogate key does not exist in 7.5.2. But in 8.0.1 the surrogate key is incremented automatically; a state file is used to store the maximum value of the surrogate key.
A reject link is defined and the reject data is loaded back into the DWH. A reject link has to be defined for every output link from which you wish to collect rejected data. Reject data is typically bad data, like duplicates of primary keys or null rows where data is expected.
Q) What are Routines and where/how are they written? Have you written any routine before?
I didn't use routines at any time in my project, but I know about them. Routines are stored in the Routines branch of the DS Repository, where you can create, view or edit them. There are the following different types of routines:
1) Transform functions
2) Before/after subroutines
3) Job control routines
MetaStage is used to handle the metadata, which is very useful for data lineage and data analysis later on. Metadata is data about the data we are handling. These data definitions are stored in the repository and can be accessed with the use of MetaStage.
QualityStage can be integrated with DataStage. In QualityStage we have many stages like Investigate, Match and Survivorship, so we can do the quality-related work; to integrate it with DataStage we need the QualityStage plug-in.
There is a stage named Stored Procedure available in the DataStage palette under the Database category. You can use that stage to call your procedure in DataStage jobs.
Job control means controlling DataStage jobs through some other DataStage job. For example, consider two jobs, XXX and YYY. Job YYY can be executed from job XXX by using DataStage macros in routines.
To execute one job from another, the following steps need to be followed in routines.