IBM InfoSphere DataStage and QualityStage
Designer Client Guide
Version 11 Release 3
SC19-4272-00
Note: Before using this information and the product that it supports, read the
information in “Notices and trademarks” on page 259.
Chapter 1. Tutorial: Designing your first job
This exercise walks you through the creation of a simple job.
The aim of the exercise is to get you familiar with the Designer client, so that you
are confident enough to design more complex jobs. There is also a dedicated tutorial for
parallel jobs, which goes into more depth about designing parallel jobs.
In this exercise you design and run a simple parallel job that reads data from a text
file, changes the format of the dates that the file contains, and writes the
transformed data back to another text file.
The source text file contains data from a wholesaler who deals in car parts. It
contains details of the wheels they have in stock. The data is organized in a table
that contains approximately 255 rows of data and four columns. The columns are
as follows:
CODE The product code for each type of wheel.
DATE The date new wheels arrived in stock (given as year, month, and day).
PRODUCT
A text description of each type of wheel.
QTY The number of wheels in stock.
The job that you create will perform the following tasks:
1. Extract the data from the file.
2. Convert (transform) the data in the DATE column from a complete date
(YYYY-MM-DD) to a year and month (YYYY, MM) stored as two columns.
3. Write the transformed data to a new text file that is created when you run the
job.
The following table shows a sample of the source data that the job reads.
The following table shows the same data after it has been transformed by the job.
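The core of the transformation, splitting each YYYY-MM-DD date into separate year and month columns, can also be sketched outside DataStage. The following Python sketch is illustrative only: the file names, column order (CODE, DATE, PRODUCT, QTY), and comma delimiter are assumptions based on the exercise description. In the exercise itself, the Transformer stage performs this work.

    import csv

    # Illustrative only: file names and column order are assumptions taken from
    # the exercise description; the comma delimiter is also an assumption.
    with open("wheels_source.txt", newline="") as src, \
            open("wheels_target.txt", "w", newline="") as tgt:
        reader = csv.reader(src)
        writer = csv.writer(tgt)
        for code, date, product, qty in reader:
            year, month, _day = date.split("-")   # YYYY-MM-DD -> YYYY, MM
            writer.writerow([code, year, month, product, qty])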
Learning objectives
As you work through the exercise, you will learn how to do the following tasks:
v Set up your project.
v Create a new job.
v Develop the job by adding stages and links and editing them.
v Compile the job.
v Run the job.
Time required
System requirements
Prerequisites
The Designer client is the tool that you use to set up your project, and to create
and design your job. The Designer client provides the tools for creating jobs that
extract, transform, load, and check the quality of data. The Designer client is like a
workbench or a blank canvas that you use to build jobs. The Designer client
palette contains the tools that form the basic building blocks of a job:
v Stages connect to data sources to read or write files and to process data.
v Links connect the stages along which your data flows.
The Designer client uses a repository in which you can store the objects that you
create during the design process. These objects can be reused by other job
designers.
Before you create your job, you must set up your project by entering information
about your data. This information includes the name and location of the tables or
files that contain your data, and a definition of the columns that the tables or files
contain. The information, also referred to as metadata, is stored in table definitions
in the repository. The easiest way to enter a table definition is to import it directly
from the source data. In this exercise you will define the table definition by
importing details about the data directly from the data file.
Lesson checkpoint
In this lesson you defined a table definition.
When a new project is installed, the project is empty and you must create the jobs
that you need. Each job can read, transform, and load data, or cleanse data. The
number of jobs that you have in a project depends on your data sources and how
often you want to manipulate data.
In this lesson, you create a parallel job named Exercise and save it to a new folder
in the Jobs folder in the repository tree.
You have created a new parallel job named Exercise and saved it in the folder
Jobs\My Jobs in the repository.
Lesson checkpoint
In this lesson you created a job and saved it to a specified place in the repository.
Ensure that the job named Exercise that you created in the previous lesson is open
and active in the job design area. A job is active when the title bar is dark blue (if
you are using the default Windows colors). A job consists of stages linked together
that describe the flow of data from a data source to a data target. A stage is a
graphical representation of the data itself, or of a transformation that will be
performed on that data. The job that you are designing has a stage to read the
data, a stage to transform the data, and a stage to write the data.
Adding stages
This procedure describes how to add stages to your job.
1. In the Designer client palette area, click the File bar to open the file section of
the palette.
2. In the file section of the palette, select the Sequential File stage icon and drag
the stage to your open job. Position the stage on the right side of the job
window.
The figure shows the file section of the palette.
3. In the file section of the palette, select another Sequential File stage icon and
drag the stage to your open job. Position the stage on the left side of the job
window.
4. In the Designer client palette area, click the Processing bar to open the
Processing section of the palette.
5. In the processing section of the palette, select the Transformer stage icon and
drag the stage to your open job. Position the stage between the two Sequential
File stages.
The figure shows the Processing section of the palette.
Adding links
This procedure describes how to add links to your job.
1. Right-click on the Sequential File stage on the left of your job and hold the
right button down. A target is displayed next to the mouse pointer to indicate
that you are adding a link.
2. Drag the target to the Transformer stage and release the mouse button. A black
line, which represents the link, joins the two stages.
Note: If the link is displayed as a red line, it means that it is not connected to
the Transformer stage. Select the end of the link and drag it to the Transformer
stage and release the link when it turns black.
3. Repeat steps 1 and 2 to connect the Transformer stage to the second Sequential
File stage.
4. Select File > Save to save the job.
Rename your stages and links with the names suggested in the table. This
procedure describes how to name your stages and links.
1. Select each stage or link.
2. Right-click and select Rename.
Your job should look like the one in the following diagram:
Lesson checkpoint
You have now designed your first job.
You configure the job by opening the stage editors for each of the stages that you
added in the previous lesson and adding details to them. You specify the following
information:
v The name and location of the text file that contains the source data.
v The format of the data that the job will read.
v Details of how the data will be transformed.
v A name and location for the file that the job writes the transformed data to.
You will configure the Sequential File stage so that it will read the data from the
data file and pass it to the Transformer stage.
13. Click Close to close the Data Browser and OK to close the stage editor.
You have configured the Transformer stage to read the data passed to it from the
Sequential File stage, and transform the data to split it into separate month and
year fields, and then pass the data to the target Sequential File stage.
Lesson checkpoint
In this lesson, you configured your job.
Ensure that the job named Exercise that you created in the previous lesson is open
and active in the job design area.
Lesson checkpoint
In this lesson you compiled your job.
You run the job from the Director client. The Director client is the operating
console. You use the Director client to run and troubleshoot jobs that you are
developing in the Designer client. You also use the Director client to run fully
developed jobs in the production environment.
You use the job log to help debug any errors you receive when you run the job.
2. Select your job in the right pane of the Director client, and select Job > Run
Now.
3. In the “Job Run Options” window, click Run.
4. When the job status changes to Finished, select View > Log.
5. Examine the job log to see the type of information that the Director client
reports as it runs a job. The messages that you see are either control or
information type. Jobs can also have Fatal and Warning messages. The
following figure shows the log view of the job.
There are three different types of job in InfoSphere DataStage, depending on what
edition or editions you have installed:
v Parallel jobs. These run on InfoSphere DataStage servers that are SMP, MPP, or
cluster systems.
v Server jobs. They run on the InfoSphere DataStage Server, connecting to other
data sources as necessary.
v Mainframe jobs. Mainframe jobs are uploaded to a mainframe, where they are
compiled and run.
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
There are two other entities that are similar to jobs in the way they appear in the
Designer, and are handled by it. These are:
v Shared containers. These are reusable job elements. They typically comprise a
number of stages and links. Copies of shared containers can be used in any
number of server jobs and parallel jobs and edited as required. Shared
containers are described in “Shared containers” on page 97.
v Job Sequences. A job sequence allows you to specify a sequence of InfoSphere
DataStage server or parallel jobs to be executed, and actions to take depending
on results. Job sequences are described in Chapter 14, “Building sequence jobs,”
on page 207.
Creating a job
You create jobs in the Designer client.
Procedure
1. Click File > New on the Designer menu. The New dialog box appears.
2. Choose the Jobs folder in the left pane.
3. Select one of the icons, depending on the type of job or shared container you
want to create.
4. Click OK.
The Diagram window appears, in the right pane of the Designer, along with the
palette for the chosen type of job. You can now save the job and give it a name.
The Open dialog box is displayed. This allows you to open a job (or any other
object) currently stored in the repository.
Procedure
1. Select the folder containing the job (this might be the Job folder, but you can
store a job in any folder you like).
2. Select the job in the tree.
3. Click OK.
Results
You can also find the job in the Repository tree and double-click it, or select it and
choose Edit from its shortcut menu, or drag it onto the background to open it.
The updated Designer window displays the chosen job in a Diagram window.
Saving a job
Save jobs in order to retain all parameters that you specified and reuse them in the
future.
Procedure
1. Choose File > Save. The Save job as dialog box appears.
2. Enter the name of the job in the Item name field.
3. Select a folder in which to store the job from the tree structure by clicking it. It
appears in the Folder path box. By default jobs are saved in the pre-configured
Job folder, but you can store it in any folder you choose.
4. Click OK. If the job name is unique, the job is created and saved in the
Repository. If the job name is not unique, a message box appears. You must
acknowledge this message before you can enter an alternative name (a job
name must be unique within the entire repository, not just the selected folder).
Results
To save an existing job with a different name, choose File > Save As... and fill in the
Save job as dialog box, specifying the new name and the folder in which the job is
to be saved.
Naming a job
The following rules apply to the names that you can give IBM InfoSphere
DataStage jobs.
Procedure
v Job names can be any length.
v They must begin with an alphabetic character.
v They can contain alphanumeric characters and underscores.
Results
Job folder names can be any length and consist of any characters, including spaces.
Stages
A job consists of stages linked together which describe the flow of data from a data
source to a data target (for example, a final data warehouse).
A stage usually has at least one data input or one data output. However, some
stages can accept more than one data input, and output to more than one stage.
The different types of job have different stage types. The stages that are available
in the Designer depend on the type of job that is currently open in the Designer.
Stages and links can be grouped in a shared container. Instances of the shared
container can then be reused in different parallel jobs. You can also define a local
container within a job; this groups stages and links into a single unit, but can only
be used within the job in which it is defined.
Each stage type has a set of predefined and editable properties. These properties
are viewed or edited using stage editors. A stage editor exists for each stage type.
These stages are either passive or active stages. A passive stage handles access to
databases for the extraction or writing of data. Active stages model the flow of
data and provide mechanisms for combining data streams, aggregating data, and
converting data from one data type to another.
The Palette organizes stage types into different groups, according to function:
v General
v Database
v File
v Processing
v Real Time
Stages and links can be grouped in a shared container. Instances of the shared
container can then be reused in different server jobs (such shared containers can
also be used in parallel jobs as a way of leveraging server job functionality). You
can also define a local container within a job; this groups stages and links into a
single unit, but can only be used within the job in which it is defined.
Each stage type has a set of predefined and editable properties. These properties
are viewed or edited using stage editors. A stage editor exists for each stage type.
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
The Palette organizes stage types into different groups, according to function:
v General
v Database
v File
v Processing
Each stage type has a set of predefined and editable properties. Some stages can be
used as data sources and some as data targets. Some can be used as both.
Processing stages read data from a source, process it and write it to a data target.
These properties are viewed or edited using stage editors. A stage editor exists for
each stage type.
The following rules apply to the names that you can give IBM InfoSphere
DataStage stages and shared containers:
v Names can be any length.
v They must begin with an alphabetic character.
Links
Links join the various stages in a job together and are used to specify how data
flows when the job is run.
The read/write link to the data source is represented by the stage itself, and
connection details are given in the stage properties.
Input links typically carry data to be written to the data target. Output links carry
data that is read from the data source. The column definitions on an input
link define the data to be written to a data target. The column definitions on an
output link define the data to be read from a data source.
Processing stages generally have an input link carrying data to be processed, and
an output link passing on processed data.
Column definitions actually belong to, and travel with, the links that connect
stages. When you define column definitions for the output link of a stage, those
same column definitions are used as input to another stage. If you move either end
of a link to another stage, the column definitions are used in the stage that you
connect to. If you change the details of a column definition at one end of a link,
those changes are reflected in the column definitions at the other end of the link.
The type of link that you use depends on whether the link is an input link or an
output link, and on which stages you are linking. IBM InfoSphere DataStage
parallel jobs support three types of links:
Stream
Stream links represent the flow of data from one stage to another. Stream
links are used by all stage types.
Reference
Reference links represent a table lookup. Reference links can be input to
Lookup stages only, and send output to other stages.
Reject Reject links represent output records that are rejected because they do not
meet specific criteria. Reject links derive their metadata from the
associated output link, so the metadata cannot be edited.
You can typically have only an input stream link or an output stream link on a File
stage or Database stage. The three link types are displayed differently in the
Designer Diagram window: stream links are represented by solid lines, reference
links by dotted lines, and reject links by dashed lines.
Link marking
For parallel jobs, metadata is associated with the links that connect stages. If you
enable link marking, a small icon is added to the link to indicate whether metadata
is currently associated with it.
Link marking is enabled by default. To disable link marking, click the link markers
icon in the Designer client toolbar, or right-click the job canvas and click
Show link marking.
Unattached links
You can add links that are only attached to a stage at one end, although they will
need to be attached to a second stage before the job can successfully compile and
run.
Unattached links are shown in a special color (red by default - but you can change
this using the Options dialog).
By default, when you delete a stage, any attached links and their metadata are left
behind, with the link shown in red. You can choose Delete including links from the
Edit or shortcut menus to delete a selected stage along with its connected links.
Input links connected to the stage generally carry data to be written to the
underlying data target. Output links carry data read from the underlying data
source. The column definitions on an input link define the data that will be written
to a data target. The column definitions on an output link define the data to be
read from a data source.
An important point to note about linking stages in server jobs is that column
definitions actually belong to, and travel with, the links as opposed to the stages.
When you define column definitions for a stage's output link, those same column
definitions will appear at the other end of the link where it is input to another
stage. If you move either end of a link to another stage, the column definitions will
appear on the new stage. If you change the details of a column definition at one
end of a link, those changes will appear in the column definitions at the other end
of the link.
There are rules covering how links are used, depending on whether the link is an
input or an output and what type of stages are being linked.
IBM InfoSphere DataStage server jobs support two types of input link:
v Stream. A link representing the flow of data. This is the principal type of link.
v Reference. A link representing a table lookup. They are used to provide
information that might affect the way data is changed, but do not supply the
data to be changed.
The two link types are displayed differently in the Designer Diagram window:
stream links are represented by solid lines and reference links by dotted lines.
There is only one type of output link, although some stages permit an output link
to be used as a reference input to the next stage and some do not.
Link marking
For server jobs, metadata is associated with a link, not a stage. If you have link
marking enabled, a small icon attaches to the link to indicate if metadata is
currently associated with it.
Link marking is enabled by default. To disable it, click on the link mark icon in the
Designer toolbar, or deselect it in the Diagram menu, or the Diagram shortcut
menu.
Unattached links
You can add links that are only attached to a stage at one end, although they will
need to be attached to a second stage before the job can successfully compile and
run.
Unattached links are shown in a special color (red by default - but you can change
this using the Options dialog).
By default, when you delete a stage, any attached links and their metadata are left
behind, with the link shown in red. You can choose Delete including links from the
Edit or shortcut menus to delete a selected stage along with its connected links.
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
Links to and from source and target stages are used to carry data to or from a
processing or post-processing stage.
For source and target stage types, column definitions are associated with stages
rather than with links. You decide what appears on the output link of a stage by
selecting column definitions on the Selection page. You can set the Column Push
Option to specify that stage column definitions be automatically mapped to output
columns (this happens if you set the option, define the stage columns then click
OK to leave the stage without visiting the Selection page).
There are rules covering how links are used, depending on whether the link is an
input or an output and what type of stages are being linked.
Mainframe stages have only one type of link, which is shown as a solid line. (A
table lookup function is supplied by the Lookup stage; the input link to this stage,
which acts as a reference, is shown with a dotted line to illustrate its function.)
Link marking
For mainframe jobs, metadata is associated with the stage and flows down the
links. If you have link marking enabled, a small icon attaches to the link to
indicate if metadata is currently associated with it.
Link marking is enabled by default. To disable it, click on the link mark icon in the
Designer toolbar, or deselect it in the Diagram menu, or the Diagram shortcut
menu.
Unattached links
Unlike server and parallel jobs, you cannot have unattached links in a mainframe
job; both ends of a link must be attached to a stage.
If you delete a stage, the attached links are automatically deleted too.
Link ordering
The Transformer stage in server jobs and various processing stages in parallel jobs
allow you to specify the execution order of links coming into or going out from the
stage.
When looking at a job design in IBM InfoSphere DataStage, there are two ways to
look at the link execution order:
v Place the mouse pointer over a link that is an input to or an output from a
Transformer stage. A ToolTip appears displaying the message:
Input execution order = n
for input links, and:
Output execution order = n
Naming links
Specific rules apply to naming links.
The following rules apply to the names that you can give IBM InfoSphere
DataStage links:
v Link names can be any length.
v They must begin with an alphabetic character.
v They can contain alphanumeric characters and underscores.
Stages are added and linked together using the palette. The stages that appear in
the palette depend on whether you have a server, parallel, or mainframe job, or a
job sequence open, and on whether you have customized the palette.
You can add, move, rename, delete, link, or edit stages in a job design.
Adding stages
There is no limit to the number of stages you can add to a job.
When you insert a stage by clicking (as opposed to dragging), you can draw a
rectangle as you click on the Diagram window to specify the size and shape of the
stage that you are inserting, as well as its location.
Each stage is given a default name which you can change if required.
If you want to add more than one stage of a particular type, press Shift after
clicking the button on the tool palette and before clicking on the Diagram window.
You can continue to click the Diagram window without having to reselect the
button. Release the Shift key when you have added the stages you need; press Esc
if you change your mind.
Moving stages
After they are positioned, stages can be moved by clicking and dragging them to a
new location in the Diagram window.
If you have the Snap to Grid option activated, the stage is attached to the nearest
grid position when you release the mouse button. If stages are linked together, the
link is maintained when you move a stage.
Renaming stages
Stages can be renamed in the stage editor or the Diagram window.
Deleting stages
Stages can be deleted from the Diagram window.
A message box appears. Click Yes to delete the stage or stages and remove them
from the Diagram window. (This confirmation prompting can be turned off if
required.)
Linking stages
You can link stages in a job design.
Moving links
Once positioned, you can move a link to a new location in the Diagram window.
You can choose a new source or destination for the link, but not both.
Procedure
1. Click the link to move in the Diagram window. The link is highlighted.
2. Click in the box at the end you want to move and drag the end to its new
location.
Results
In server and parallel jobs you can move one end of a link without reattaching it
to another stage. In mainframe jobs both ends must be attached to a stage.
Deleting links
Links can be deleted from the Diagram window.
A message box appears. Click Yes to delete the link. The link is removed from the
Diagram window.
Renaming links
You can rename a link.
Resize stages by selecting each stage and dragging on one of the sizing handles in
the bounding box.
Editing stages
After you add the stages and links to the Diagram window, you must edit the
stages to specify the data you want to use and any aggregations or conversions
required.
Data arrives into a stage on an input link and is output from a stage on an output
link. The properties of the stage and the data on each input and output link are
specified using a stage editor.
A dialog box appears. The content of this dialog box depends on the type of stage
you are editing. See the individual stage descriptions for details.
The data on a link is specified using column definitions. The column definitions
for a link are specified by editing a stage at either end of the link. Column
definitions are entered and edited identically for each stage type.
The column definitions are displayed in a grid on the Columns tab for each link.
The Columns grid has a row for each column definition. The columns present
depend on the type of stage. Some entries contain text (which you can edit) and
others have a drop-down list containing all the available options for the cell.
You can edit the grid to add new column definitions or change values for existing
definitions. Any changes are saved when you save your job design.
The Columns tab for each link also contains the following buttons which you can
use to edit the column definitions:
v Save... . Saves column definitions as a table definition in the Repository.
v Load... . Loads (copies) the column definitions from a table definition in the
Repository.
To edit a column definition in the grid, click the cell you want to change then
choose Edit cell... from the shortcut menu or press Ctrl-E to open the Edit Column
Metadata dialog box.
To add a new column at the bottom of the grid, edit the empty row.
To add a new column between existing rows, position the cursor in the row below
the desired position and press the Insert key or choose Insert row... from the
shortcut menu.
After you define the new row, you can right-click on it and drag it to a new
position in the grid.
Naming columns
The rules for naming columns depend on the type of job the table definition will
be used in:
Server jobs
Column names can be any length. They must begin with an alphabetic character or
$ and contain alphanumeric, underscore, period, and $ characters.
Parallel jobs
Column names can be any length. They must begin with an alphabetic character or
$ and contain alphanumeric, underscore, and $ characters.
Mainframe jobs
Column names can be any length. They must begin with an alphabetic character
and contain alphanumeric, underscore, #, @, and $ characters.
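As an informal illustration of these rules, the following Python sketch checks a column name against each job type. The regular expressions simply restate the rules above; they are not part of the product and are only an assumption about how you might validate names yourself.

    import re

    # Patterns restate the naming rules above; illustrative only, not a product API.
    PATTERNS = {
        "server":    r"^[A-Za-z$][A-Za-z0-9_.$]*$",   # alphabetic or $, then alphanumeric, _, ., $
        "parallel":  r"^[A-Za-z$][A-Za-z0-9_$]*$",    # alphabetic or $, then alphanumeric, _, $
        "mainframe": r"^[A-Za-z][A-Za-z0-9_#@$]*$",   # alphabetic, then alphanumeric, _, #, @, $
    }

    def is_valid_column_name(name, job_type):
        return re.match(PATTERNS[job_type], name) is not None

    print(is_valid_column_name("ORDER_DATE", "mainframe"))   # True
    print(is_valid_column_name("$price.usd", "server"))       # True
    print(is_valid_column_name("$price.usd", "parallel"))     # False (period not allowed)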
Unwanted column definitions can be easily removed from the Columns grid. To
delete a column definition, click any cell in the row you want to remove and press
the Delete key or choose Delete row from the shortcut menu. Click OK to save
any changes and to close the Table Definition dialog box.
To delete several column definitions at once, hold down the Ctrl key and click in
the row selector column for the rows you want to remove. Press the Delete key or
choose Delete row from the shortcut menu to remove the selected rows.
Each table definition has an identifier which uniquely identifies it in the repository.
This identifier is derived from:
v Data source type. This describes the type of data source holding the actual table
the table definition relates to.
v Data source name. The DSN or equivalent used when importing the table
definition (or supplied by the user where the table definition is entered
manually).
v Table definition name. The name of the table definition.
In previous releases of IBM InfoSphere DataStage all table definitions were located
in the Table Definitions category of the repository tree, within a subcategory
structure derived from the three-part identifier. For example, the table definition
tutorial.FACTS is a table definition imported from a UniVerse database table called
FACTS into the tutorial project using the localuv connection. It would have been
located in the category Table definitions\UniVerse\localuv. Its full identifier would
have been UniVerse\localuv\tutorial.FACTS.
With InfoSphere DataStage Release 8.0, the table definition can be located
anywhere in the repository that you choose. For example, you might want a top
level folder called Tutorial that contains all the jobs and table definitions concerned
with the server job tutorial.
Most stages allow you to selectively load columns, that is, specify the exact
columns you want to load.
Procedure
1. Click Load... . The Table Definitions dialog box appears. This window displays
the repository tree to enable you to browse for the required table definition.
2. Double-click the appropriate folder.
3. Continue to expand the folders until you see the table definition you want.
4. Select the table definition you want.
Note: You can use Quick Find to enter the name of the table definition you
want. The table definition is selected in the tree when you click OK.
5. Click OK. One of two things happens, depending on the type of stage you are
editing:
You can import definitions from a number of different data sources. Alternatively
you can define the column definitions manually.
You can import or enter table definitions from the Designer. For instructions, see
Chapter 4, “Defining your data,” on page 51.
Tip: When you browse for files in a chosen directory on the server, the file list
can take a long time to populate if there are many files present on the server
computer. For faster browsing, you can set the DS_OPTIMIZE_FILE_BROWSE
variable to true in the Administrator client. By default, this parameter is set to
false.
If you choose to browse, the Browse directories or Browse files dialog box appears.
v Look in. Displays the name of the current directory (or can be a drive if
browsing a Windows system). This field has a drop-down list that shows where in the
directory hierarchy you are currently located.
v Directory/File list. Displays the directories and files in the chosen directory.
Double-click the file you want, or double-click a directory to move to it.
v File name. The name of the selected file. You can use wildcards here to browse
for files of a particular type.
v Files of type. Select a file type to limit the types of file that are displayed.
v Back button. Moves to the previous directory visited (is disabled if you have
visited no other directories).
v Up button. Takes you to the parent of the current directory.
v View button. Offers a choice of different ways of viewing the directory/file tree.
v OK button. Accepts the file in the File name field and closes the Browse files
dialog box.
v Cancel button. Closes the dialog box without specifying a file.
v Help button. Invokes the Help system.
You can paste them into another job canvas of the same type. This can be in the
same Designer, or another one, and you can paste them into different projects. You
can also use Paste Special to paste stages and links into a new shared container.
Note: Be careful when cutting from one context and pasting into another. For
example, if you cut columns from an input link and paste them onto an output
link they could carry information that is wrong for an output link and needs
editing.
To cut a stage, select it in the canvas and select Edit > Cut (or press CTRL-X). To
copy a stage, select it in the canvas and select Edit > Copy (or press CTRL-C). To
paste the stage, select the destination canvas and select Edit > Paste (or press
CTRL-V). Any links attached to a stage will be cut and pasted too, complete with
metadata. If there are name conflicts with stages or links in the job into which you
are pasting, IBM InfoSphere DataStage will automatically update the names.
Pre-configured stages
There is a special feature that you can use to paste components into a shared
container and add the shared container to the palette.
To paste a stage into a new shared container, select Edit > Paste Special > Into new
Shared Container. The Paste Special into new Shared Container dialog box appears.
This allows you to select a folder and name for the new shared container, enter a
description and optionally add a shortcut to the palette.
If you want to cut or copy metadata along with the stages, you should select
source and destination stages, which will automatically select links and associated
metadata. These can then be cut or copied and pasted as a group.
Annotations
You can use annotations for a wide variety of purposes throughout your job
design. For example, you can use annotations to explain, summarize, or describe a
job design or to help identify parts of a job design.
There are two types of annotations that you can use in job designs:
Annotation
You enter this text yourself and you can add as many of this type of
annotation as required. Use it to annotate stages and links in your job
design. These annotations can be copied and pasted into other jobs.
Description Annotation
You can add only one of these types of annotations for each job design.
When you create a description annotation, you can choose whether the
Description Annotation displays the full description or the short
description from the job properties. Description Annotations cannot be
copied and pasted into other jobs. The job properties short or full
description remains in sync with the text you type for the Description
Annotation. Changes you make in the job properties description display in
the Description Annotation, and changes you make in the Description
Annotation display in the job properties description.
Annotations do not obstruct the display of the stages, links, or other components
in your job design.
Procedure
1. In the General section of the palette, click Description Annotation or
Annotation.
2. Click the area of the canvas where you want to insert the annotation. You can
resize the annotation box as required.
3. Double-click the annotation box or right-click the annotation box, and click
Properties.
4. In the Annotation Properties dialog box, type the annotation text.
What to do next
v You can use the Toggle annotations button in the toolbar to show or hide
annotations in the canvas.
v When you create a description annotation, you can choose whether the
Description Annotation displays the full description or the short description
from the job properties.
You can browse the data associated with the input or output links of any server
job built-in passive stage or with the links to certain parallel job stages.
The Data Browser is invoked by clicking the View Data... button from a stage
Inputs or Outputs page, or by choosing the View link Data... option from the
shortcut menu.
For parallel job stages a supplementary dialog box lets you select a subset of data
to view by specifying the following:
v Rows to display. Specify the number of rows of data you want the data browser
to display.
v Skip count. Skip the specified number of rows before viewing data.
v Period. Display every Pth record where P is the period. You can start after
records have been skipped by using the Skip property. P must equal or be
greater than 1.
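As a rough illustration of how these three settings interact, the following Python sketch selects the rows that the Data Browser would show. It is a simplification, assumes rows are read sequentially from the source, and is not the product implementation.

    def rows_to_browse(rows, display, skip=0, period=1):
        # Skip the first 'skip' rows, then take every 'period'-th row,
        # stopping once 'display' rows have been selected. Illustrative only.
        selected = []
        for i, row in enumerate(rows[skip:]):
            if i % period == 0:
                selected.append(row)
                if len(selected) == display:
                    break
        return selected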
If your administrator has enabled the Generated OSH Visible option in the IBM
InfoSphere DataStage Administrator, the supplementary dialog box also has a
Show OSH button. Click this to open a window showing the OSH that will be run
to generate the data view. It is intended for expert users.
The Data Browser uses the metadata defined for that link. If there is insufficient
data associated with a link to allow browsing, the View Data... button and shortcut
menu command used to invoke the Data Browser are disabled. If the Data Browser
requires you to input some parameters before it can determine what data to
display, the Job Run Options dialog box appears and collects the parameters (see
"The Job Run Options Dialog Box").
You can view a row containing a specific data item using the Find... button. The
Find dialog box will reposition the view to the row containing the data you are
interested in. The search is started from the current row.
The Display... button invokes the Column Display dialog box. This allows you to
simplify the data displayed by the Data Browser by choosing to hide some of the
columns. For server jobs, it also allows you to normalize multivalued data to
provide a 1NF view in the Data Browser.
This dialog box lists all the columns in the display, all of which are initially
selected. To hide a column, clear it.
For server jobs, the Normalize on drop-down list box allows you to select an
association or an unassociated multivalued column on which to normalize the
data. The default is Un-normalized, and choosing Un-normalized will display the
data in NF2 form with each row shown on a single line. Alternatively you can
select Un-Normalized (formatted), which displays multivalued rows split over
several lines.
In the example, the Data Browser would display all columns except STARTDATE.
The view would be normalized on the association PRICES.
When you turn on performance statistics and compile a job, information is displayed against each link in
the job. When you run the job, either through the InfoSphere DataStage Director or
the Designer, the link information is populated with statistics to show the number
of rows processed on the link and the speed at which they were processed. The
links change color as the job runs to show the progress of the job.
Procedure
1. With the job open and compiled in the Designer, choose Diagram > Show
performance statistics. Performance information appears against the links. If the
job has not yet been run, the figures will be empty.
2. Run the job (either from the Director or by clicking the Run button). Watch the
links change color as the job runs and the statistics populate with the number of
rows and rows/sec.
Results
If you alter anything on the job design you will lose the statistical information
until the next time you compile the job.
The job currently in focus will run, provided it has been compiled and saved.
Parameters page
The Parameters page lists any parameters or parameter sets that have been defined
for the job.
If default values have been specified, these are displayed too. You can enter a
value in the Value column, edit the default, or accept the default as it is. Click Set
to Default to set a parameter to its default value, or click All to Default to set all
parameters to their default values. Click Property Help to display any help text
that has been defined for the selected parameter (this button is disabled if no help
has been defined). Click OK when you are satisfied with the values for the
parameters.
When setting a value for an environment variable, you can specify one of the
following special values:
v $ENV. Instructs IBM InfoSphere DataStage to use the current setting for the
environment variable.
v $PROJDEF. The current setting for the environment variable is retrieved and set
in the job's environment (so that value is used wherever in the job the
environment variable is used). If the value of that environment variable is
subsequently changed in the Administrator client, the job will pick up the new
value without the need for recompiling.
v $UNSET. Instructs InfoSphere DataStage to explicitly unset the environment
variable.
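The intent of these special values can be summarized informally as follows. This Python sketch only models the semantics described above; it is not how InfoSphere DataStage itself is implemented, and the project_defaults dictionary is a stand-in for values held by the Administrator client.

    import os

    def resolve_env_value(name, value, project_defaults):
        # Models the semantics described above; not the product implementation.
        if value == "$ENV":
            return os.environ.get(name)          # use the current environment setting
        if value == "$PROJDEF":
            return project_defaults.get(name)    # pick up the project default at run time
        if value == "$UNSET":
            os.environ.pop(name, None)           # explicitly unset the variable
            return None
        return value                             # an explicit value supplied for the job run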
Limits page
The Limits page allows you to specify whether stages in the job should be limited
in how many rows they process and whether run-time error warnings should be
ignored.
General page
Use the General page to specify that the job should generate operational metadata.
You can also disable any message handlers that have been specified for this job
run.
You can save the details of these connections as data connection objects. Data
connection objects store the information needed to connect to a particular database
in a reusable form. See Connecting to data sources.
You use data connection objects together with related connector stages to define a
connection to a data source in a job design. You can also use data connection
objects when you import metadata.
If you change the details of a data connection when designing a job, these changes
are reflected in the job design. When you compile your job, however, the data
connection details are fixed in the executable version of the job. Subsequent
changes to the job design will once again link to the data connection object.
Procedure
1. Choose File > New to open the New dialog box.
2. Open the Other folder, select the Data Connection icon, and click OK.
3. In the Data Connection dialog box, enter the required details on each of the
pages as described in the following sections.
The Data Connection name must start with an alphabetic character and comprise
alphanumeric and underscore characters. The maximum length is 255 characters.
All data connection objects are associated with a particular type of stage. The
parameters associated with that stage are displayed and you can choose whether to
supply values for them as part of the data connection object. You can choose to
leave out some parameters; for example, you might not want to specify a
password as part of the object. In this case you can specify the missing property
when you design a job using this object.
You can create data connection objects associated with the following types of stage:
v Connector stages. You can create data connection objects associated with any of
the connector stages.
v Parallel job stages. You can create data connection objects associated with any of
the following types of parallel job stages:
– DB2/UDB Enterprise stage
– Oracle Enterprise stage
– Informix® Enterprise stage
– Teradata Enterprise stage
v Server job stages. You can create data connection objects associated with any of
the following types of server job stages:
– ODBC stage
– UniData stage
– Universe stage
v Supplementary stages. You can create data connection objects associated with
any of the following types of supplementary stages:
– DRS stage
– DB2/UDB API stage
– Informix CLI stage
– MS OLEDB stage
– Oracle OCI 9i stage
– Sybase OC stage
– Teradata API stage
Procedure
1. Choose the type of stage that the object relates to in the Connect using Stage
Type field by clicking the browse button and selecting the stage type object
from the repository tree. The Connection parameters list is populated with the
parameters that the connector stage requires in order to make a connection.
2. For each of the parameters, choose whether you are going to supply a value as
part of the object, and if so, supply that value.
Results
You can also specify that the values for the parameters will be supplied via a
parameter set object. To specify a parameter set in this way, step 2 is as follows:
v Click the arrow button next to the Parameter set field, then choose Create from
the menu. The Parameter Set window opens.
For more details about parameter set objects, see Chapter 5, “Making your jobs
adaptable,” on page 89.
You can choose to do this when performing the following types of metadata
import:
v Import via connectors
v Import via supplementary stages (Import > Table Definitions > Plug-in
Metadata Definitions)
v Import via ODBC definitions (Import > Table Definitions > ODBC Table
Definitions)
v Import from UniVerse table (Import > Table Definitions > UniVerse Table
Definitions)
v Import from Orchestrate® Schema (Import > Table Definitions > Orchestrate
Schema Definitions)
In some of these cases IBM InfoSphere DataStage provides a wizard to guide you
through the import, in others the import is via a simple dialog box. The method
for creating a data connection object from the import varies according to the
import type.
In order to share imported metadata with other components, you must specify a
data connection. You can do this either by loading one that was created earlier, or
saving the current connection details as a new data connection. If you continue the
import without having specified a data connection, imported metadata may not be
visible or usable outside of IBM InfoSphere DataStage.
Procedure
1. On the Connections page of the wizard (where you specify information such as
user name and password), click the Save link. The Data Connection dialog box
appears with connection details already filled in.
2. Fill in the remaining details such as name and description, and folder to store
the object in.
3. Click OK to close the Data Connection dialog box, and continue with the
import wizard.
Procedure
1. After you have made a connection to the data source, the dialog box expands
to list all the tables that are candidates for import. Click the Save Connection
button. The Data Connection dialog box appears with connection details
already filled in.
2. Fill in the remaining details such as name and description, and folder to store
the object in.
3. Click OK to close the Data Connection dialog box, and continue with the
import dialog box.
If you choose to import from a database, you can save the connection details you
use to a data connection object by using the following procedure.
Procedure
1. Supply the required database connection details and click the Save Data
Connection button. The Data Connection dialog box appears with connection
details already filled in.
2. Fill in the remaining details such as name and description, and folder to store
the object in.
3. Click OK to close the Data Connection dialog box, and continue with the
import wizard.
Procedure
1. Specify the required connection details in the connector stage editor (required
details vary with type of connector).
2. Click the Save button. The Data Connection dialog box appears with
connection details already filled in.
3. Fill in the remaining details such as name and description, and folder to store
the object in.
4. Click OK to close the Data Connection dialog box, and continue with your job
design.
Procedure
1. Fill in the required connection details in the stage editor and then close it.
2. Select the stage icon on the job design canvas.
3. Right click and choose Save Data Connection from the shortcut menu. The
Data Connection dialog box appears with connection details already filled in.
4. Fill in the remaining details such as name and description, and folder to store
the object in.
5. Click OK to close the Data Connection dialog box, and continue with your job
design.
or:
6. Fill in the required connection details in the stage editor.
7. Go to the Stage page of the stage editor.
8. Click the arrow next to the Data Connection field and choose Save. The Data
Connection dialog box appears with connection details already filled in.
9. Fill in the remaining details such as name and description, and folder to store
the object in.
10. Click OK to close the Data Connection dialog box, and continue with your job
design.
Where you have previously imported a table definition using a data connection
object, you can use the table definition as follows:
v Drag the table definition object to the canvas to create a stage of the associated
kind, with the connection details filled in, together with a link carrying the table
definition.
v Drag the table definition to an existing link on the canvas. The Designer client
asks if you want to use the connection details for the associated stage.
Procedure
1. In the connector stage editor click Load Connection.
2. Choose from the data connection objects that you have currently defined for
this type of connector stage.
3. Click OK. The data connection details are loaded.
Results
If the data connection object only supplies some of the connection properties
required you will have to supply the rest yourself. For example, the password
might be deliberately left out of the data connection object, in which case you can
supply it in the job design or specify a job parameter and specify it at run time.
Procedure
Choose one of the following two procedures to use a data connection object to
supply connection details in a stage other than a connector stage.
v Option 1:
1. Select the stage icon on the job design canvas.
2. Right click and choose Load Data Connection from the shortcut menu.
3. Choose from the data connection objects that you have currently defined for
this type of stage.
4. Click OK. The data connection details are loaded.
v Option 2:
1. Open the stage editor.
2. Click the arrow next to the Data Connection box on the Stage page and
choose Load from the menu. A browse dialog box opens showing the
repository tree.
3. Choose the data connection object you want to use and click Open. The
name of the data connection object appears in the Data Connection field and
the connection information for the stage is filled in from the object.
Results
If the data connection object only supplies some of the connection properties
required you will have to supply the rest yourself. For example, the password
might be deliberately left out of the data connection object, in which case you can
supply it in the job design or specify a job parameter and specify it at run time.
When you import via a connector, a wizard guides you through the process. The
wizard is slightly different depending on which connector you are using for the
import.
Do the following steps to use a data connection object during the import.
Procedure
1. On the Connections page of the wizard (where you specify information such as
user name and password), click the Load link.
2. Choose from the data connection objects that you have currently defined for
this type of connector stage.
3. Click OK to proceed with the metadata import.
You can also import metadata directly from a data connection object.
Procedure
1. Select the data connection object in the repository tree.
2. Right-click and select Import metadata from the shortcut menu.
Results
The appropriate import wizard opens and guides you through the process of
importing metadata.
You define the data by importing or defining table definitions. You can save the
table definitions for use in your job designs.
Table definitions are the key to your IBM InfoSphere DataStage project and specify
the data to be used at each stage of a job. Table definitions are stored in the
repository and are shared by all the jobs in a project. As a minimum, you need a
table definition for each data source and one for each data target in the data
warehouse.
When you develop a job you will typically load your stages with column
definitions from table definitions held in the repository. You do this on the relevant
Columns tab of the stage editor. If you select the options in the Grid Properties
dialog box, the Columns tab will also display two extra fields: Table Definition
Reference and Column Definition Reference. These show the table definition and
individual columns that the columns on the tab were derived from.
You can import, create, or edit a table definition using the Designer.
General page
The General page contains general information about the table definition.
The combination of the data source type, data source name, and table or file name
forms a unique identifier for the table definition. The entire identifier is shown at
the top of the General page. No two table definitions can have the same identifier.
The table definition can be located anywhere in the repository that you choose. For
example, you might want a top level folder called Tutorial that contains all the jobs
and table definitions concerned with the server job tutorial.
Columns page
The Columns page contains a grid displaying the column definitions for each
column in the table definition.
The following columns appear if you selected the Meta data supports
Multi-valued fields check box on the General page:
The following columns appear if the table definition is derived from a COBOL file
definition mainframe data source:
v Level number. The COBOL level number.
Mainframe table definitions also have the following columns, but due to space
considerations, these are not displayed on the Columns page. To view them, choose
Edit Row... from the Columns page shortcut menu. The Edit Column Metadata
dialog box appears, displaying the following fields on the COBOL tab:
v Occurs. The COBOL OCCURS clause.
v Sign indicator. Indicates whether the column can be signed or not.
v Sign option. If the column is signed, gives the location of the sign in the data.
v Sync indicator. Indicates whether this is a COBOL-synchronized clause or not.
v Usage. The COBOL USAGE clause.
v Redefined field. The COBOL REDEFINED clause.
v Depending on. A COBOL OCCURS-DEPENDING-ON clause.
v Storage length. Gives the storage length in bytes of the column as defined.
v Picture. The COBOL PICTURE clause.
The Columns page for each link also contains a Clear All and a Load... button.
The Clear All button deletes all the column definitions. The Load... button loads
(copies) the column definitions from a table definition elsewhere in the Repository.
A shortcut menu available in grids allows you to edit a cell, delete a row, or add a
row.
Format page
The Format page contains file format parameters for sequential files used in server
jobs.
These fields are automatically set when you import a table definition from a
sequential file.
The rest of this page contains five fields. The available fields depend on the
settings for the check boxes.
v Spaces between columns. Specifies the number of spaces used between the
columns in the file. This field appears when you select Fixed-width columns.
v Delimiter. Contains the delimiter that separates the data fields. By default this
field contains a comma. You can enter a single printable character or a decimal
or hexadecimal number to represent the ASCII code for the character you want
to use. Valid ASCII codes are in the range 1 to 253. Decimal values 1 through 9
must be preceded with a zero. Hexadecimal values must be prefixed with &h.
Enter 000 to suppress the delimiter.
v Quote character. Contains the character used to enclose strings. By default this
field contains a double quotation mark. You can enter a single printable
character or a decimal or hexadecimal number to represent the ASCII code for
the character you want to use. Valid ASCII codes are in the range 1 to 253.
Decimal values 1 through 9 must be preceded with a zero. Hexadecimal values
must be prefixed with &h. Enter 000 to suppress the quote character.
v NULL string. Contains characters that are written to the file when a column
contains SQL null values.
v Padding character. Contains the character used to pad missing columns. This is
# by default.
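To illustrate how the Delimiter, Quote character, and NULL string settings relate to the data in the file, here is a minimal Python sketch. The specific values used (a comma, a double quotation mark, and the string NULL) are only examples of what might be set on this page, and the sketch is not how the product reads files.

    import csv

    # Example settings only; the actual values come from the Format page of the
    # table definition.
    DELIMITER = ","
    QUOTE_CHAR = '"'
    NULL_STRING = "NULL"

    def read_sequential_file(path):
        with open(path, newline="") as f:
            for record in csv.reader(f, delimiter=DELIMITER, quotechar=QUOTE_CHAR):
                # Columns containing the NULL string are treated as SQL nulls.
                yield [None if field == NULL_STRING else field for field in record]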
The Sync Parallel button is visible only if your system supports parallel jobs. It
causes the properties set on the Parallel tab to mirror the properties set on this
page when the button is pressed. A dialog box appears asking you to confirm this
action; if you confirm, the Parallel tab appears so that you can view the settings.
NLS page
If NLS is enabled, this page contains the name of the map to use for the table
definitions.
The map should match the character set used in the definitions. By default, the list
box shows all the maps that are loaded and ready to use with server jobs. Show
all Server maps lists all the maps that are shipped with IBM InfoSphere DataStage.
Show all Parallel maps lists the maps that are available for use with parallel jobs.
Note: You cannot use a server map unless it is loaded into InfoSphere DataStage.
You can load different maps using the Administrator client.
Select Allow per-column mapping if you want to assign different character set
maps to individual columns.
Relationships page
The Relationships page shows you details of any relationships this table definition
has with other tables, and allows you to define new relationships.
Parallel page
This page is used when table definitions are used in parallel jobs and gives
detailed format information for the defined metadata.
The information given here is the same as on the Format tab in one of the
following parallel job stages:
v Sequential File Stage
v File Set Stage
v External Source Stage
v External Target Stage
v Column Import Stage
v Column Export Stage
The Defaults button gives access to a shortcut menu offering the choice of:
v Save current as default. Saves the settings you have made in this dialog box as
the default ones for your table definition.
v Reset defaults from factory settings. Resets to the defaults that IBM InfoSphere
DataStage came with.
v Set current from default. Sets the current settings to the default (this could be
the factory default, or your own default if you have set one up).
Click the Show schema button to open a window showing how the current table
definition is generated into an OSH schema. This shows how InfoSphere DataStage
will interpret the column definitions and format properties of the table definition
in the context of a parallel job stage.
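For example, a table definition with three columns might be presented as a schema
similar to the following. The column names and format properties shown here are
illustrative only; the generated schema reflects your own column definitions and
format settings:

  record
    {final_delim=end, delim=',', quote=double}
  (
    part_code: string[max=10];
    qty: int32;
    last_updated: date;
  )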
Layout page
The Layout page displays the schema format of the column definitions in a table.
Locator page
Use the Locator page to view and edit the data resource locator associated with the
table definition.
The labels and contents of the fields in this page vary according to the type of data
source/target the locator originates from.
If the import data connection details were saved in a data connection object when
the table definition was created, then the data connection object is identified by the
Data Connection field.
If the table definition is related to a shared table, the name of the shared table is
given in the Created from Data Collection field.
If the table definition is related to a shared table with a Name Alias, then the alias
is listed in the Name alias field.
A new table definition is created and the properties are automatically filled in with
the details of your data source or data target.
You can import table definitions from the following data sources:
v Assembler files
v COBOL files
v DCLGen files
v ODBC tables
v Orchestrate schema definitions
v PL/1 files
v Data sources accessed using certain connectivity stages.
v Sequential files
v Stored procedures
v UniData files
IBM InfoSphere DataStage connects to the specified data source and extracts the
required table definition metadata. You can use the Data Browser to view the
actual data in data sources from which you are importing table definitions.
Procedure
1. Choose Import > Table Definitions > Data Source Type from the main menu.
For most data source types, a dialog box appears enabling you to connect to
the data source (for some sources, a wizard appears and guides you through
the process).
2. Fill in the required connection details and click OK. Once a connection to the
data source has been made successfully, the updated dialog box gives details of
the table definitions available for import.
3. Select the required table definitions and click OK. The table definition metadata
is imported into the repository.
Results
The Data Browser can be used when importing table definitions from the following
sources:
v ODBC table
v UniVerse table
v Hashed (UniVerse) file
v Sequential file
v UniData file
v Some types of supplementary stages
The Data Browser is opened by clicking the View Data... button on the Import
Metadata dialog box. The Data Browser window appears.
The Data Browser uses the metadata defined in the data source. If there is no data,
a Data source is empty message appears instead of the Data Browser.
The Display... button opens the Column Display dialog box. It allows you to
simplify the data displayed by the Data Browser by choosing to hide some of the
columns. It also allows you to normalize multivalued data to provide a 1NF view
in the Data Browser.
This dialog box lists all the columns in the display, and initially these are all
selected. To hide a column, clear it.
You can also make table definitions in the IBM InfoSphere DataStage repository
available to other suite components.
Shared metadata
You can share metadata between the local project repository and the suite-wide
shared repository.
When you are working in a project repository, the metadata that is displayed in the
repository tree is local to that project. The metadata cannot be used by another
project or another suite component unless you make the metadata available as a
table in the shared repository.
You can manage shared metadata using a tool in the Designer client. The shared
metadata is stored in a hierarchy of objects that reflect the data sources from which
the metadata was derived. The hierarchy has one of the following structures:
Ensure that the Share metadata when importing from Connectors option is set for
the current project in the Administrator client. This option is selected by default. If
this option is not set, only the table definition in the project repository is created
when you import metadata by using a connector. You can subsequently associate a
table definition with a table in the shared repository by using the shared metadata
feature.
The metadata is imported from the external data source. A table is created in the
shared repository, and a table definition is created in your project repository tree.
Procedure
1. From the Designer client, open the Import Connector Metadata wizard.
2. On the Connector selection page, select the connector for the import process.
The connector that you want depends on the type of data source that you are
importing the metadata from.
3. On the Connection details page, enter the connection details for the data
source, and click Next. The next pages collect information that is specific to the
type of connector that you are using for the import process.
4. Specify the details for the type of connector that you selected, and click Next.
5. On the Data source location page, select the host name and database to identify
where you want to store the metadata in the shared repository. If the lists are
not populated, click New location to start the Shared Metadata Management
tool so that you can create host and database objects in the repository that
correspond to the data source that you are importing metadata from.
6. Click Next.
7. Confirm the import details and click Import.
8. Browse the repository tree and select the location in the project repository for
the table definition that you are creating, and then click OK.
Procedure
v Select Import > Table Definitions > Start connector import wizard from the
main menu.
v Select Import Table Definition > Start connector import wizard from the
repository tree shortcut menu.
v From a stage editor Columns tab, click Load, and then select Import Table
Definition > Start connector import wizard from the repository tree shortcut
menu in the Table Definitions window.
The table definitions that are created from metadata in the shared repository have
a different icon from the table definitions that have been imported or created
locally in the project. Information about the source of the table definition is shown
in the Locator page of the Table Definition window.
Table definitions that are linked to tables in the shared repository are identified by
the following icon:
Table definitions that are local to the project are identified by the following icon:
Procedure
1. In the Designer client, select Repository > Metadata Sharing > Create Table
Definition from Table from the main menu.
2. Browse the tree in the Create Table Definition from Table window and select
the tables from which you want to build table definitions in your project. You
can select individual tables or select a schema, database, or host that is higher
in the tree to select all the contained tables.
3. In the Folder in which to create Table Definitions field, specify the folder in
your project repository where you want to store the table definitions.
4. Click Create.
You can create tables only from table definitions that do not have an existing link
with a shared table. Table definitions must have a valid locator. The locator
describes the real-world object that the table definition was derived from. A locator
is created automatically when you import the table definition from a data source,
or you can specify a locator in the Table Definition Properties window.
Procedure
1. In the Designer client, do one of the following:
v Select the table definition that you want to share, right-click, and select
Shared Table Creation Wizard.
v Select Repository > Metadata sharing > Shared Table Creation Wizard.
2. In the Select Table Definitions page of the Shared Table Creation wizard, click
Add to open a browse window.
3. In the browse window, select one or more table definitions that you want to
create tables for (if you opened the wizard by selecting a table definition and
right-clicking, then that table definition is already listed). You can select a table
definition in the list and click View properties to view the properties of the
selected table definition.
4. Click OK to close the browse window and add the selected table definition to
the wizard page.
5. When you have selected all the table definitions that you want to share, click
Next. The wizard searches the tables in the shared repository and determines if
any of them match the table definitions that you specified. It links tables to
table definitions where they match. The wizard displays the results in the
Create or Associate Tables page. If no automatic link has been made for a table
definition, you have three choices:
v Create a new table in the shared repository.
v Create a link to an existing table in the shared repository.
v Use the wizard to review ratings for tables that might match the table
definitions.
To create a new table in the shared repository for a table definition:
a. Click the Association to Shared Table column.
b. Select Create New from the menu options.
c. In the Create New Table window, select the Host, Database, and Schema
details for the new table from the lists. The table name is derived from the
table definition name, but you can edit the name if required.
d. Click OK. If the wizard detects that a table with those details already exists,
it asks you if you want to link to that table, or change the details and create
a new table. Otherwise, the path name of the shared table appears in the
Association to Shared Table column, and New is shown in the Action
column.
To manually create a link to an existing table in the shared repository:
a. Click on the Association to Shared Table column.
b. Select Browse existing from the menu options.
c. In the Browse for shared table window, browse the tree structure to locate
the table that you want to link the table definition to.
d. Click OK. The path name of the shared table is shown in the Association to
Shared Table column, and Linking is shown in the Action column.
Synchronizing metadata
You can check that the table definition in your project repository is synchronized
with the table in the shared repository. You can check the synchronization state
manually to ensure that no changes have occurred since the last repository refresh.
A table definition is in the synchronized state when its modification time and date
match the modification time and date of the table in the shared repository to
which it is linked.
Procedure
1. Select one or more table definitions in the project repository tree.
2. Select Repository > Metadata Sharing > Update table definition from shared
table from the main menu.
3. If any of the table definitions are not synchronized with the tables, you can do
one of the following actions for that table definition. You can perform these
actions on multiple tables if required:
v Click Update or Update All to update the table definition or table definitions
from the table or tables.
v Click Remove or Remove All to remove the link between the table definition
or table definitions and the table or tables.
4. If the table definitions are synchronized with the tables, you can either close the
window or click Remove to remove the link between the table definition and
the table.
Use the Shared Metadata Management tool to add a new host system, add a new
database or data file, or add a new schema. You can also use the tool to delete
items from the shared repository.
You can open the quick find tool from the Shared Metadata Management tool to
search for objects in the shared repository.
The new host system object is shown in the tree in the Shared Metadata
Management tool. The details that you enter are shown in the right pane of the
Shared Metadata Management tool whenever this host system object is selected in
the tree.
Procedure
1. Select Repository > Metadata Sharing > Management from the main menu to
open the Shared Metadata Management tool.
2. Click the repository icon at the top of the tree.
3. Select Add > Add new host system.
4. In the Add new host system window, specify information about your host
system. The Name and Network Node fields are mandatory; the other
fields are optional.
5. Click OK.
The new database object is shown in the tree in the Shared Metadata Management
tool. The details that you enter are shown in the right pane of the Shared Metadata
Management tool whenever this database object is selected in the tree. Click the
Columns tab to view the table columns.
Procedure
1. Select Repository > Metadata Sharing > Management from the main menu to
open the Shared Metadata Management tool.
2. Select the host system where you want to add a database.
3. Select Add > Add new database.
4. In the Add new database window, specify information about your database.
The Name field is mandatory; the other fields are optional.
5. Click OK.
The new schema object is shown in the tree in the Shared Metadata Management
tool. The details that you enter are shown in the right pane of the Shared Metadata
Management tool whenever this schema object is selected in the tree.
Procedure
1. Select Repository > Metadata Sharing > Management from the main menu to
open the Shared Metadata Management tool.
2. Select the database where you want to add a schema.
3. Select Add > Add new schema.
4. In the Add new schema window, specify information about your schema. The
Name field is mandatory; the other fields are optional.
5. Click OK.
The new data file object is shown in the tree in the Shared Metadata Management
tool. The details that you enter are shown in the right pane of the Shared Metadata
Management tool whenever this object is selected in the tree. Click the Columns
tab to view the data columns in the file.
Procedure
1. Select Repository > Metadata Sharing > Management from the main menu to
open the Shared Metadata Management tool.
2. Select the host system where you want to add a data file.
3. Select Add > Add new data file.
4. In the Add new data file window, specify information about your data file. The
Name field is mandatory; the other fields are optional.
5. Click OK.
To manually enter table definition properties, you must first create a new table
definition. You can then enter suitable settings for the general properties before
specifying the column definitions. You only need to specify file format settings for
a sequential file table definition.
Procedure
1. Enter the type of data source in the Data source type field. The name entered
here determines how the definition appears under the Table Definitions
branch.
2. Enter the name of the data source in the Data source name field. This forms
the second part of the table definition identifier and is the name of the branch
created under the data source type branch.
3. Enter the name of the table or file containing the data in the Table name field.
This is the last part of the table definition identifier and is the name of the leaf
created under the data source branch. The rules for naming table definitions are
as follows:
v Table names can be any length.
v They must begin with an alphabetic character.
v They can contain alphanumeric, period, and underscore characters.
4. Where the Data source type specifies a relational database, type the name of
the database owner in Owner.
5. If you are entering a mainframe table definition, choose the platform type
from the Mainframe platform type drop-down list, and the access type from
the Mainframe access type drop-down list. Otherwise leave both of these
items set to <Not applicable>.
6. Select the Metadata supports Multi-valued fields check box if the metadata
supports multivalued data.
7. If required, specify what character an ODBC data source uses as a quote
character in ODBC quote character.
8. Enter a brief description of the data in the Short description field. This is an
optional field.
9. Enter a more detailed description of the data in the Long description field.
This is an optional field.
10. Click the Columns tab. The Columns page appears at the front of the Table
Definition dialog box. You can now enter or load column definitions for your
data.
Procedure
1. Do one of the following:
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
Field Level
String Type
Date Type
Time Type
Timestamp Type
Integer Type
Decimal Type
Float Type
Nullable
Generator
If the column is being used in a Row Generator or Column Generator stage, this
allows you to specify extra details about the mock data being generated. The exact
fields that appear depend on the data type of the column being generated. They
allow you to specify features of the data being generated, for example, for integers
they allow you to specify if values are random or whether they cycle. If they cycle
you can specify an initial value, an increment, and a limit. If they are random, you
can specify a seed value for the random number generator, whether to include
negative numbers, and a limit.
Vectors
If the row you are editing represents a column which is a variable length vector,
tick the Variable check box. The Vector properties appear; these give the size of the
vector in one of two ways:
If the row you are editing represents a column which is a vector of known length,
enter the number of elements in the Vector Occurs box.
Subrecords
If the row you are editing represents a column which is part of a subrecord the
Level Number column indicates the level of the column within the subrecord
structure.
If you specify Level numbers for columns, the column immediately preceding will
be identified as a subrecord. Subrecords can be nested, so can contain further
subrecords with higher level numbers (that is, level 06 is nested within level 05).
Subrecord fields have a Tagged check box to indicate that this is a tagged
subrecord.
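For example, if a column defined at level 05 is followed by columns defined at
level 06, the level 05 column is treated as a subrecord that contains the level 06
columns (the level numbers here are illustrative).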
Extended
For certain data types the Extended check box appears to allow you to modify the
data type as follows:
v Char, VarChar, LongVarChar. Select to specify that the underlying data type is a
ustring.
v Time. Select to indicate that the time field includes microseconds.
v Timestamp. Select to indicate that the timestamp field includes microseconds.
v TinyInt, SmallInt, Integer, BigInt types. Select to indicate that the underlying
data type is the equivalent uint field.
Use the buttons at the bottom of the Edit Column Metadata dialog box to continue
adding or editing columns, or to save and close. The buttons are:
v Previous and Next. View the metadata in the previous or next row. These
buttons are enabled only where there is a previous or next row. If there
are outstanding changes to the current row, you are asked whether you want to
save them before moving on.
v Close. Close the Edit Column Metadata dialog box. If there are outstanding
changes to the current row, you are asked whether you want to save them
before closing.
v Apply. Save changes to the current row.
v Reset. Remove all changes made to the row since the last time you applied
changes.
Click OK to save the column definitions and close the Edit Column Metadata
dialog box.
Remember, you can also edit the column definitions grid by using the general grid
editing controls.
Note: You can click Open quick find to enter the name of the table definition
you want. The table definition is automatically highlighted in the tree when
you click OK. You can use the Import button to import a table definition from
a data source.
2. When you locate the table definition whose contents you want to copy, select it
and click OK. The Select Columns dialog box appears. It allows you to specify
which column definitions from the table definition you want to load.
Use the arrow buttons to move columns back and forth between the Available
columns list and the Selected columns list. The single arrow buttons move
highlighted columns, the double arrow buttons move all items. By default all
columns are selected for loading. Click Find... to open a dialog box which lets
you search for a particular column. The shortcut menu also gives access to
Find... and Find Next. Click OK when you are happy with your selection. This
closes the Select Columns dialog box and loads the selected columns into the
stage.
For mainframe stages and certain parallel stages where the column definitions
derive from a CFD file, the Select Columns dialog box might also contain a
Create Filler check box. This happens when the table definition the columns
are being loaded from represents a fixed-width table. Select this to cause
sequences of unselected columns to be collapsed into filler items. Filler columns
are sized appropriately, their data type set to character, and name set to
FILLER_XX_YY where XX is the start offset and YY the end offset. Using fillers
results in a smaller set of columns, saving space and processing time and
making the column set easier to understand.
If you are importing column definitions that have been derived from a CFD file
into server or parallel job stages, you are warned if any of the selected columns
redefine other selected columns. You can choose to carry on with the load or go
back and select columns again.
3. Save the table definition by clicking OK.
Results
You can edit the table definition to remove unwanted column definitions, assign
data elements, or change branch names.
Note: Certain stages in server and parallel jobs do not accept particular characters
at run time, even though you can specify them in the IBM InfoSphere DataStage
and QualityStage® Designer client.
To view a table definition, select it in the repository tree and do one of the
following:
v Choose Properties... from the shortcut menu.
v Double-click the table definition in the display area.
The Table Definition dialog box appears. You can edit any of the column definition
properties or delete unwanted definitions.
To edit a column definition in the grid, click the cell you want to change then
choose Edit cell... from the shortcut menu or press Ctrl-E to open the Edit Column
Metadata dialog box.
Unwanted column definitions can be easily removed from the Columns grid. To
delete a column definition, click any cell in the row you want to remove and press
the Delete key or choose Delete row from the shortcut menu. Click OK to save
any changes and to close the Table Definition dialog box.
To delete several column definitions at once, hold down the Ctrl key and click in
the row selector column for the rows you want to remove. Press the Delete key or
choose Delete row from the shortcut menu to remove the selected rows.
To find a particular column definition, choose Find row... from the shortcut menu.
The Find dialog box appears, allowing you to enter a string to be searched for in
the specified column.
Propagating values
You can propagate the values for the properties set in a column to several other
columns.
Select the column whose values you want to propagate, then hold down shift and
select the columns you want to propagate to. Choose Propagate values... from the
shortcut menu to open the dialog box.
In the Property column, click the check box for the property or properties whose
values you want to propagate. The Usage field tells you if a particular property is
applicable to certain types of job only (for example server, mainframe, or parallel)
or certain types of table definition (for example COBOL). The Value field shows
the value that will be propagated for a particular property.
To use stored procedures in your job designs, you use an ODBC stage in a server job, or the STP stage in a server or
parallel job (the STP stage has its own documentation, which is available when
you install the stage).
Note: ODBC stages support the use of stored procedures with or without input
arguments and the creation of a result set, but do not support output arguments
or return values. In this case a stored procedure might have a return value
defined, but it is ignored at run time. A stored procedure might not have output
parameters.
The definition for a stored procedure (including the associated parameters and
metadata) can be stored in the Repository. These stored procedure definitions can
be used when you edit an ODBC stage or STP stage in your job design.
You can import, create, or edit a stored procedure definition using the Designer.
Results
The dialog box for stored procedures has additional pages, giving it up to six pages
in all:
v General. Contains general information about the stored procedure. The Data
source type field on this page must contain StoredProcedures to display the
additional Parameters page.
v Columns. Contains a grid displaying the column definitions for each column in
the stored procedure result set. You can add new column definitions, delete
unwanted definitions, or edit existing ones. For more information about editing
a grid, see Editing Column Definitions.
v Parameters. Contains a grid displaying the properties of each input parameter.
Note: If you cannot see the Parameters page, you must enter StoredProcedures
in the Data source type field on the General page.
The grid has the following columns:
– Column name. The name of the parameter column.
– Key. Indicates whether the column is part of the primary key.
– SQL type. The SQL data type.
– Extended. This column gives you further control over data types used in
parallel jobs when NLS is enabled. Selecting a value from the extended
drop-down list is the equivalent to selecting the Extended option in the Edit
Column Metadata dialog box Parallel tab. The available values depend on the
base data type.
– I/O Type. Specifies the type of parameter. Can be one of IN, INOUT, OUT, or
RETURN. Note that the ODBC stage only supports IN and INOUT
parameters. The STP stage supports all parameter types. (An example follows
this list.)
– Length. The data precision. This is the length for CHAR data and the
maximum length for VARCHAR data.
– Scale. The data scale factor.
– Nullable. Specifies whether the column can contain null values. This is set to
indicate whether the column is subject to a NOT NULL constraint. It does not
itself enforce a NOT NULL constraint.
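For example, a hypothetical procedure that returns a stock quantity through an
OUT parameter could be called from the STP stage but not from an ODBC stage;
with an ODBC stage the quantity would have to be returned as part of the result
set instead, because only IN and INOUT parameters are supported.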
To manually enter a stored procedure definition, first create the definition. You can
then enter suitable settings for the general properties, before specifying definitions
for the columns in the result set and the input parameters.
Note: You do not need to edit the Format page for a stored procedure definition.
Procedure
1. Choose File > New to open the New dialog box.
2. Open the Other folder, select the Table definition icon, and click OK.
3. The Table Definition dialog box appears. You must enter suitable details for each
page appropriate to the type of table definition you are creating. At a minimum
you must supply identification details on the General page and column
definitions on the Columns page. Details are given in the following sections.
The Columns page appears at the front of the Table Definition dialog box. You can
now enter or load column definitions. For more information, see "Entering Column
Definitions" and "Loading Column Definitions".
Note: You do not need a result set if the stored procedure is used for input
(writing to a database). However, in this case, you must have input parameters.
You can enter parameter definitions directly in the Parameters grid using the
general grid controls, or you can use the Edit Column Metadata dialog box.
Procedure
1. Do one of the following:
v Right-click in the column area and choose Edit row... from the shortcut
menu.
v Press Ctrl-E.
The Edit Column Metadata dialog box appears. The Server tab is on top,
and only contains a Data Element and a Display field.
2. In the main page, specify the SQL data type by choosing an appropriate type
from the drop-down list in the SQL type cell.
3. Enter an appropriate value for the data precision in the Length cell.
4. Enter an appropriate data scale factor in the Scale cell.
5. Specify whether the parameter can contain null values by choosing an
appropriate option from the drop-down list in the Nullable cell.
6. Enter text to describe the column in the Description cell. This cell expands to
a drop-down text entry box if you enter more characters than the display
width of the column. You can increase the display width of the column if you
want to see the full text description.
7. In the Server tab, enter the maximum number of characters required to
display the parameter data in the Display cell.
8. In the Server tab, choose the type of data the column contains from the
drop-down list in the Data element cell. This list contains all the built-in data
elements supplied with IBM InfoSphere DataStage and any additional data
elements you have defined. You do not need to edit this cell to create a
column definition. You can assign a data element at any point during the
development of your job.
9. Click Apply and Close to save and close the Edit Column Metadata dialog
box.
10. You can continue to add more parameter definitions by editing the last row in
the grid. New parameters are always added to the bottom of the grid, but you
can select and drag the row to a new position in the grid.
Procedure
1. Enter the raiserror values that you want to be regarded as a fatal error. The
values should be separated by a space.
2. Enter the raiserror values that you want to be regarded as a warning. The
values should be separated by a space.
To view a stored procedure definition, select it in the repository tree and do one of
the following:
v Choose Properties... from the shortcut menu.
v Double-click the stored procedure definition in the display area.
The Table Definition dialog box appears. You can edit or delete any of the column
or parameter definitions.
To edit a definition, click the cell you want to change. The way you edit the cell
depends on the cell contents. If the cell contains a drop-down list, choose an
alternative option from the drop-down list. If the cell contains text, you can start
typing to change the value, or press F2 or choose Edit cell... from the shortcut
menu to put the cell into edit mode. Alternatively you can edit rows using the Edit
Column Meta Data dialog box.
To delete several column or parameter definitions at once, hold down the Ctrl key
and click in the row selector column for the rows you want to remove. Press the
Delete key or choose Delete row from the shortcut menu to remove the selected
rows.
Parameters
Use job parameters to design flexible, reusable jobs. If you want to process data
based on the results for a particular week, location, or product, you can include
these settings as part of your job design. However, when you want to use the job
again for a different week or product, you must edit the design and recompile the
job.
Instead of entering variable factors as part of the job design, you can create
parameters that represent processing variables. When you run the job, you are
prompted to select values for each of the parameters that you define. For
mainframe jobs, the parameter values are placed in a file that is accessed when the
job is compiled and run on the mainframe.
You can supply default values for parameters, which are used unless another value
is specified when the job runs. For most parameter types, you enter a default value
into the Default Value cell. When entering a password or a list variable,
double-click the Default Value cell to open further dialog boxes to supply default
values.
Parameter sets
You can specify job parameters on a per-job basis by using the Parameters page of
the Job Properties window. For parallel jobs, server jobs, and sequence jobs, you
can also create parameter sets and store them in the repository. Use parameter sets
to define job parameters that you are likely to reuse in different jobs, such as
connection details for a particular database. Then, when you need this set of
parameters in a job design, you can insert them into the job properties from the
parameter set. You can also define different sets of values for each parameter set.
These are stored as files in the IBM InfoSphere DataStage server install directory,
and are available for you to use when you run jobs that use these parameter sets.
If you make any changes to a parameter set, these changes are reflected in job
designs that use this object up until the time the job is compiled. The parameters
that a job is compiled with are the ones that will be available when the job is run
(although if you change the design after compilation the job will once again link to
the current version of the parameter set).
Environment variables
Procedure
1. Open the job that you want to define parameters for.
2. Click Edit > Job Properties to open the Job Properties window.
3. Click the Parameters tab.
4. Enter the following information for the parameter that you are creating. Each
parameter represents a source file or a directory.
Parameter name
The name of the parameter.
Prompt
The text that displays for this parameter when you run the job.
Type
The type of parameter that you are creating, which can be one of the
following values:
Results
Your parameter is added to your job. For stages that accept job properties as input,
such as the Sequential File stage, you can use the job parameter as input.
Combining similar parameters into a parameter set simplifies the task of running a
job, and makes parameters easier to manage.
Procedure
1. Open the job that you want to create a parameter set for.
2. Click Edit > Job Properties to open the Job Properties window.
3. Click the Parameters tab.
4. Press and hold the Ctrl key, then select the parameters that you want to include
in the parameter set.
5. With your parameters highlighted, click Create Parameter Set. The Parameter
Set window opens.
a. Enter a name and short description for your parameter set.
b. Click the Parameters tab. All of the parameters that you selected are listed.
c. Click the Values tab.
d. Enter a name in the Value File name field, then press Enter. The value for
each of your parameters is automatically populated with the path name that
you entered.
e. If a default value is not already set, enter a value for each parameter. For
example, if the variable is a Pathname type, enter a default path name.
f. Click OK to close the Parameter Set window.
g. In the Save Parameter Set As window, select the folder where you want to
save your parameter set and click Save. When prompted to replace the
selected parameters with the parameter set, click Yes.
6. Click OK to close the Job Properties window.
Results
Your parameter set is created. If you need to modify any parameters in your
parameter set, expand the folder where you saved your parameter set, then
double-click your parameter set to open it in a new window. From this window,
you can edit the parameters and their values.
Procedure
1. Open the job that you want to define environment variables for.
2. Press Ctrl + J to open the Job Properties window.
3. Click the Parameters tab.
4. In the lower right of the Parameters page, click Add Environment Variable.
The Choose environment variable window opens to display a list of the
available environment variables.
v To create a new environment variable:
1. Click New. The Create new environment variable window opens.
2. Enter a name and the prompt that you want to display at run time, then
click OK.
3. In the list, click the environment variable that you created.
v To use an existing environment variable, click the environment variable that
you want to override at run time.
What to do next
When you run the job and specify a value for the environment variable, you can
optionally specify one of the following special values:
$ENV
Use the current setting for the environment variable.
$PROJDEF
Retrieve the current setting for the environment variable, and set it in the job
environment. This value is then used in the job wherever the environment
variable is used. If the value of the environment variable is changed in the
Administrator client, the job retrieves the new value without recompiling.
$UNSET
Unset the environment variable.
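For example, if you add the $APT_CONFIG_FILE environment variable to the job
as a parameter and set its value to $PROJDEF, the job picks up the configuration
file path that is currently defined for the project each time it runs, without being
recompiled.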
Procedure
1. Open the job that you want to add a parameter set to.
2. Click Edit > Job Properties to open the Job Properties window.
3. Click the Parameters tab.
4. Click Add Parameter Set.
A window opens that lists all parameter sets for the current project.
5. Expand the folder that contains the parameter set that you want to add.
6. Click the parameter set, then click OK.
The parameter set is listed in the Parameters page of the Job Properties
window.
7. Click OK to close the Job Properties window.
After you add parameters and parameter sets to your job, you insert them into
properties for various stages. Properties that you can substitute a job parameter for
have an insert icon next to the property value field.
If you delete a parameter, ensure that you remove the references to the parameter
from your job design. If you do not remove the references, your job might fail.
Procedure
1. Open the stage that you want to edit. For example, a Sequential File stage.
2. Click the property that you want to insert a parameter for. For example, click
the File property in a Sequential File stage.
3. To the right of the property, click the insert icon, then click Insert job
parameter.
4. Select the parameter that you want to use, then press the Enter key.
The parameter is displayed in the property field, delimited by number signs
(#). For example, #Parameter_name#.
If you add a parameter that is included in a parameter set, the parameter set
name precedes the name of the parameter. For example,
#Parameter_set_name.Parameter_name#. (A fuller example follows this procedure.)
5. Click OK to close the stage editor.
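For example, the File property of a Sequential File stage might be set to
#SourceDir#/weekly_sales.txt, where SourceDir is a hypothetical job parameter
that supplies the directory path at run time and the rest of the value is entered
as literal text.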
Use the Parameters page of the sequence job Properties to add parameters and
parameter sets to your sequence job. This procedure is the same as that for an
ordinary job.
After you add a parameter set to the sequence job, you map the values to the job
activity.
Procedure
1. Open the activity that you want to modify the parameter values for.
2. In the Parameters section, select the parameter that you want to modify the
value of.
3. Click Insert Parameter.
The External Parameter Helper window opens.
4. Select the parameter set that you want to associate with the parameter, then
click OK.
5. Click OK to close the Job Activity window.
A container is a group of stages and links. Containers enable you to simplify and
modularize your job designs by replacing complex areas of the diagram with a
single container stage.
In a server job, all of the columns that are supplied by a local or a shared
container stage must be used by the stage that follows the container in the job.
In a parallel job, all of the columns that are supplied by a server shared container
stage must be used by the stage that follows the container in the job.
In a parallel job, a subset of the columns that are supplied by a parallel shared
container or a local container can be used by the stage that follows the container in
the job.
Local containers
The main purpose of using a local container is to simplify a complex design
visually to make it easier to understand in the Diagram window.
If the job has lots of stages and links, it might be easier to create additional
containers to describe a particular sequence of steps. Containers are linked to other
stages or containers in the job by input and output stages.
You can create a local container from scratch, or place a set of existing stages and
links within a container. A local container is only accessible to the job in which it is
created.
Click the Container icon in the General group on the tool palette and click on the
Diagram window, or drag it onto the Diagram window. A Container stage is added
to the Diagram window. Double-click the stage to open it, and add stages and
links to the container.
Results
You can rename, move, and delete a container stage in the same way as any other
stage in your job design.
You can edit the stages and links in a container in the same way you do for a job.
See Using Input and Output Stages for details on how to link the container to
other stages in the job.
In the container itself, you cannot have a link hanging in mid-air, so input and
output stages are used to represent the stages in the main job to which the
container connects.
You can have any number of links into and out of a local container, but all of the
link names inside the container must match the link names into and out of it in the
job. Once a connection is made, editing metadata on either side of the container
edits the metadata on the connected stage in the job.
You can deconstruct a local container regardless of whether you created it from a
group in the first place. To deconstruct a local container, do one of the following:
v Select the container stage in the Job Diagram window and click Deconstruct
from the shortcut menu.
v Select the container stage in the Job Diagram window and click Edit >
Deconstruct Container on the main menu.
IBM InfoSphere DataStage prompts you to confirm the action (you can disable this
prompt if required). Click OK and the constituent parts of the container appear in
the Job Diagram window, with existing stages and links shifted to accommodate
them.
If any name conflicts arise during the deconstruction process between stages from
the container and existing ones, you are prompted for new names. You can select
the Use Generated Names checkbox to have InfoSphere DataStage allocate new
names automatically from then on. If the container has any unconnected links,
these are discarded. Connected links remain connected.
Shared containers
Shared containers help you to simplify your design but, unlike local containers,
they are reusable by other jobs.
You can use shared containers to make common job components available
throughout the project. You can create a shared container from a stage and
associated metadata and add the shared container to the palette to make this
pre-configured stage available to other jobs.
Shared containers comprise groups of stages and links and are stored in the
Repository like IBM InfoSphere DataStage jobs. When you insert a shared container
into a job, InfoSphere DataStage places an instance of that container into the
design. When you compile the job containing an instance of a shared container, the
code for the container is included in the compiled job. You can use the InfoSphere
DataStage debugger on instances of shared containers used within server jobs.
When you add an instance of a shared container to a job, you will need to map
metadata for the links into and out of the container, as these can vary in each job
in which you use the shared container. If you change the contents of a shared
container, you will need to recompile those jobs that use the container in order for
the changes to take effect. For parallel shared containers, you can take advantage
of runtime column propagation to avoid the need to map the metadata. If you
enable runtime column propagation, then, when the job runs, metadata will be
automatically propagated across the boundary between the shared container and
the stage(s) to which it connects in the job.
Note that there is nothing inherently parallel about a parallel shared container -
although the stages within it have parallel capability. The stages themselves
determine how the shared container code will run. Conversely, when you include a
server shared container in a parallel job, the server stages have no parallel
capability, but the entire container can operate in parallel because the parallel job
can execute multiple instances of it.
You can create a shared container from scratch, or place a set of existing stages and
links within a shared container.
Note: If you encounter a problem when running a job which uses a server shared
container in a parallel job, you could try increasing the value of the
DSIPC_OPEN_TIMEOUT environment variable in the Parallel Operator specific
category of the environment variable dialog box in the InfoSphere DataStage
Administrator.
Procedure
1. Choose the stages and links by doing one of the following:
v Click and drag the mouse over all the stages you want in the container.
v Select a stage. Press Shift and click the other stages and links you want to
add to the container.
All the chosen stages are highlighted in the system highlight color.
2. Choose Edit > Construct Container > Shared. You are prompted for a name for
the container by the Create New dialog box. The group is replaced by a Shared
Container stage of the appropriate type with the specified name in the Diagram
window. You are warned if any link naming conflicts occur when the container
is constructed. Any parameters occurring in the components are copied to the
shared container as container parameters. The instance created has all its
parameters assigned to corresponding job parameters.
To create an empty shared container, to which you can add stages and links,
choose File > New on the Designer menu. The New dialog box appears, open the
Shared Container folder and choose the parallel shared container icon or server
shared container icon as appropriate and click OK.
A new Diagram window appears in the Designer, along with a Tool palette which
has the same content as for parallel jobs or server jobs, depending on the type of
shared container. You can now save the shared container and give it a name. This
is exactly the same as saving a job (see “Saving a job” on page 22).
The following rules apply to the names that you can give IBM InfoSphere
DataStage shared containers:
v Container names can be any length.
v They must begin with an alphabetic character.
v They can contain alphanumeric characters.
A Diagram window appears, showing the contents of the shared container. You can
edit the stages and links in a container in the same way you do for a job.
Note: The shared container is edited independently of any job in which it is used.
Saving a job, for example, will not save any open shared containers used in that
job.
To edit the properties, ensure that the shared container diagram window is open
and active and choose Edit Properties. If the shared container is not currently
open, select it in the Repository window and choose Properties from the shortcut
menu. The Shared Container Properties dialog box appears. This has two pages,
General and Parameters.
IBM InfoSphere DataStage inserts an instance of that shared container into the job
design. This is the same for both server jobs and parallel jobs.
The stages in the job that connect to the container are represented within the
container by input and output stages, in the same way as described for local
containers (see Using Input and Output Stages). Unlike on a local container,
however, the links connecting job stages to the container are not expected to have
the same name as the links within the container.
Once you have inserted the shared container, you need to edit its instance
properties by doing one of the following:
v Double-click the container stage in the Diagram window.
v Select the container stage and choose Edit Properties... .
This is similar to a general stage editor, and has Stage, Inputs, and Outputs pages,
each with subsidiary tabs.
Stage page
All stage editors have a stage page. The page contains various fields that describe
the stage.
v Stage Name. The name of the instance of the shared container. You can edit this
if required.
v Shared Container Name. The name of the shared container of which this is an
instance. You cannot change this.
The General tab enables you to add an optional description of the container
instance.
The Properties tab allows you to specify values for container parameters. You need
to have defined some parameters in the shared container properties for this tab to
appear.
v Name. The name of the expected parameter.
v Value. Enter a value for the parameter. You must enter values for all expected
parameters here as the job does not prompt for these at run time. (You can leave
string parameters blank, an empty string will be inferred.)
v Insert Parameter. You can use a parameter from a parent job (or container) to
supply a value for a container parameter. Click Insert Parameter to be offered a
list of available parameters from which to choose.
The Advanced tab appears when you are using a server shared container within a
parallel job. It has the same fields and functionality as the Advanced tab on all
parallel stage editors.
Inputs page
When inserted in a job, a shared container instance already has metadata defined
for its various links.
This metadata must exactly match, in all properties, the metadata on the link that
the job uses to connect to the container. The Inputs page enables you to map metadata
as required. The only exception to this is where you are using runtime column
propagation (RCP) with a parallel shared container. If RCP is enabled for the job,
and specifically for the stage whose output connects to the shared container input,
then metadata will be propagated at run time, so there is no need to map it at
design time.
In all other cases, in order to match, the metadata on the links being matched must
have the same number of columns, with corresponding properties for each.
The Inputs page for a server shared container has an Input field and two tabs,
General and Columns. The Inputs page for a parallel shared container, or a server
shared container used in a parallel job, has an additional tab: Partitioning.
v Input. Choose the input link to the container that you want to map.
Note: You can use a Transformer stage within the job to manually map data
between a job stage and the container stage in order to supply the metadata that
the container requires.
v Description. Optional description of the job input link.
The Columns page shows the metadata defined for the job stage link in a standard
grid. You can use the Reconcile option on the Load button to overwrite metadata
on the job stage link with the container link metadata in the same way as
described for the Validate option.
The Partitioning tab appears when you are using a server shared container within
a parallel job. It has the same fields and functionality as the Partitioning tab on all
parallel stage editors.
The Advanced tab appears for parallel shared containers and when you are using a
server shared container within a parallel job. It has the same fields and
functionality as the Advanced tab on all parallel stage editors.
Outputs page
The Outputs page enables you to map metadata between a container link and the
job link which connects to the container on the output side.
It has an Outputs field and a General tab, Columns tab, and Advanced tab, which
perform equivalent functions to those described for the Inputs page.
The Columns tab for parallel shared containers has a Runtime column propagation
check box. This is visible provided RCP is enabled for the job. It shows whether
RCP is switched on or off for the link that the container link is mapped onto. This
removes the need to map the metadata.
Pre-configured components
You can use shared containers to make pre-configured stages available to other
jobs.
Procedure
1. Select a stage and relevant input/output link (you need the link too in order to
retain metadata).
2. Choose Copy from the shortcut menu, or select Edit > Copy.
3. Select Edit > Paste Special > Into new shared container... . The Paste Special
into new Shared Container dialog box appears.
4. Choose to create an entry for this container in the palette (the dialog will do
this by default).
To use the pre-configured component, select the shared container in the palette and
Ctrl+drag it onto the canvas. This deconstructs the container so that the stage and
link appear on the canvas.
Converting containers
You can convert local containers to shared containers and vice versa.
By converting a local container to a shared one you can make the functionality
available to all jobs in the project.
You might want to convert a shared container to a local one if you want to slightly
modify its functionality within a job. You can also convert a shared container to a
local container and then deconstruct it into its constituent parts as described in
“Deconstructing a local container” on page 97.
To convert a container, select its stage icon in the job Diagram window and either
click Convert from the shortcut menu, or click Edit > Convert Container from the
main menu.
Containers nested within the container you are converting are not affected.
When converting from shared to local, you are warned if link name conflicts occur
and given a chance to resolve them.
Parallel routines
Parallel jobs can execute routines before or after a processing stage executes (a
processing stage being one that takes input, processes it then outputs it in a single
stage), or can use routines in expressions in Transformer stages.
These routines are defined and stored in the repository, and then called in the
Triggers page of the particular Transformer stage Properties dialog box. These
routines must be supplied in a shared library or an object file, and do not return a
value (any values returned are ignored).
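These routines are ordinary C or C++ functions that you compile into a shared
library or an object file outside the Designer. The following is a minimal sketch
only; the function name, its argument, and the compile command are assumptions
and are not part of the product:

  // my_routines.cpp - a hypothetical routine to be registered as a parallel
  // routine. extern "C" keeps the symbol name unmangled so that the name you
  // enter in the Parallel Routine dialog box matches the symbol in the
  // library; check the compiler and linkage requirements for your platform.
  extern "C" int double_qty(int qty)
  {
      return qty * 2;  // for before/after routines any return value is ignored
  }

  // One possible compile step on Linux (compilers and options vary by platform):
  //   g++ -shared -fPIC -o libmyroutines.so my_routines.cpp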
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Routine folder and select the Parallel Routine icon.
c. Click OK. The Parallel Routine dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Parallel Routine from the pop-up menu. The Parallel
Routine dialog box appears, with the General page on top.
2. Enter general information about the routine as follows:
v Routine name. Type the name of the routine. Routine names can be any
length. They must begin with an alphabetic character and can contain
alphanumeric and period characters.
v Type. Choose External Function if this routine is calling a function to include
in a transformer expression. Choose External Before/After Routine if you are
defining a routine to execute as a processing stage before/after routine.
v Object Type. Choose Library or Object. This option specifies how the C
function is linked in the job. If you choose Library, the function is not linked
into the job and you must ensure that the shared library is available at run
time. For the Library invocation method, the routine must be provided in a
shared library.
There are three different types of stage that you can define:
v Custom. This allows knowledgeable Orchestrate users to specify an Orchestrate
operator as an IBM InfoSphere DataStage stage. This is then available to use in
Parallel jobs.
v Build. This allows you to design and build your own operator as a stage to be
included in Parallel Jobs.
v Wrapped. This allows you to specify a UNIX command to be executed by a
stage. You define a wrapper file that in turn defines arguments for the UNIX
command and inputs and outputs.
The Designer client provides an interface that allows you to define a new Parallel
job stage of any of these types.
The stage will be available to all jobs in the project in which the stage was defined.
You can make it available to other projects using the Designer Export/Import
facilities. The stage is automatically added to the job palette.
Procedure
1. Do one of:
a. Click File > New on the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Parallel Stage Type icon.
c. Click OK. The Stage Type dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Click New > Other > Parallel Stage > Custom on the shortcut menu. The
Stage Type dialog box appears, with the General page on top.
2. Fill in the fields on the General page as follows:
v Stage type name. This is the name by which the stage will be known to IBM
InfoSphere DataStage. Avoid using the same name as existing stages.
v Parallel Stage type. This indicates the type of new Parallel job stage you are
defining (Custom, Build, or Wrapped). You cannot change this setting.
v Execution Mode. Choose the execution mode. This is the mode that will
appear in the Advanced tab on the stage editor. You can override this mode
for individual instances of the stage as required, unless you select Parallel
only or Sequential only.
v Mapping. Choose whether the stage has a Mapping tab or not. A Mapping
tab enables the user of the stage to specify how output columns are derived
from the data produced by the stage. Choose None to specify that output
mapping is not performed, or choose Default to accept the default setting that
InfoSphere DataStage uses.
v Preserve Partitioning. Choose the default setting of the Preserve Partitioning
flag. This is the setting that will appear in the Advanced tab on the stage
editor. You can override this setting for individual instances of the stage as
required.
v Partitioning. Choose the default partitioning method for the stage. This is
the method that will appear in the Inputs page Partitioning tab of the stage
editor. You can override this method for individual instances of the stage as
required.
The stage will be available to all jobs in the project in which the stage was defined.
You can make it available to other projects using the IBM InfoSphere DataStage
Export facilities. The stage is automatically added to the job palette.
Note that the custom operator that your build stage executes must have at least
one input data set and one output data set.
The code for the Build stage is specified in C++. There are a number of macros
available to make the job of coding simpler. There are also a number of header
files available that contain many useful functions.
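The per-record logic that you enter in the Build stage editor is ordinary C++; column access and record transfer are handled by the build-stage macros described in the product documentation and are not shown here. Purely as an illustration, the following plain C++ helper (a hypothetical function, not part of the product) is the kind of routine that per-record code might call before assigning a value to an output column.
#include <cctype>
#include <string>

// Illustrative helper for per-record code: removes whitespace from a code
// value and converts it to upper case before it is written to an output column.
std::string normalizeCode(const std::string &raw)
{
    std::string out;
    for (char c : raw) {
        unsigned char uc = static_cast<unsigned char>(c);
        if (!std::isspace(uc))
            out.push_back(static_cast<char>(std::toupper(uc)));
    }
    return out;
}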
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Parallel Stage Type icon.
c. Click OK. The Stage Type dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Other > Parallel Stage > Build from the shortcut menu. The
Stage Type dialog box appears, with the General page on top.
2. Fill in the fields on the General page as follows:
v Stage type name. This is the name by which the stage will be known to
InfoSphere DataStage. Avoid using the same name as existing stages.
v Class Name. The name of the C++ class. By default this takes the name of
the stage type.
v Parallel Stage type. This indicates the type of new parallel job stage you are
defining (Custom, Build, or Wrapped). You cannot change this setting.
v Execution mode. Choose the default execution mode. This is the mode that
will appear in the Advanced tab on the stage editor. You can override this
mode for individual instances of the stage as required, unless you select
Parallel only or Sequential only.
You define a wrapper file that handles arguments for the UNIX command and
inputs and outputs. The Designer provides an interface that helps you define the
wrapper.
The UNIX command that you wrap can be a built-in command, such as grep, a
third-party utility, or your own UNIX application. The only limitation is that the
command must be "pipe-safe" (to be pipe-safe, a UNIX command reads its input
sequentially, from beginning to end).
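To make the idea of pipe-safety concrete, the following small C++ program is shown only as an example of a command that you could wrap; it is not part of the product. It reads its standard input sequentially from beginning to end and writes to standard output, so it behaves correctly when its input and output are connected to pipes.
#include <iostream>
#include <string>

// A trivially pipe-safe filter: input is read sequentially, line by line,
// and results are written to standard output as they are produced.
int main()
{
    std::string line;
    while (std::getline(std::cin, line)) {
        // Example processing: pass non-empty lines through unchanged.
        if (!line.empty())
            std::cout << line << '\n';
    }
    return 0;
}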
You need to define metadata for the data being input to and output from the stage.
You also need to define the way in which the data will be input or output. UNIX
commands can take their inputs from standard in, or another stream, a file, or
from the output of another command via a pipe. Similarly data is output to
standard out, or another stream, to a file, or to a pipe to be input to another
command. You specify what the command expects.
InfoSphere DataStage handles data being input to the Wrapped stage and will
present it in the specified form. If you specify a command that expects input on
standard in, or another stream, InfoSphere DataStage will present the input data
from the job's data flow as if it were on standard in. Similarly, it will intercept data
output on standard out, or another stream, and integrate it into the job's data flow.
You also specify the environment in which the UNIX command will be executed
when you define the wrapped stage.
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Parallel Stage Type icon.
c. Click OK. The Stage Type dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Other > Parallel Stage > Wrapped from the shortcut menu. The
Stage Type dialog box appears, with the General page on top.
2. Fill in the fields on the General page as follows:
v Stage type name. This is the name by which the stage will be known to
InfoSphere DataStage. Avoid using the same name as existing stages or the
name of the actual UNIX command you are wrapping.
v Category. The category that the new stage will be stored in under the stage
types branch. Type in or browse for an existing category or type in the name
of a new one. The category also determines what group in the palette the
stage will be added to. Choose an existing category to add to an existing
group, or specify a new category to create a new palette group.
Server routines
You can define your own custom routines that can be used in various places in
your server job designs.
Server routines are stored in the repository, where you can create, view, or edit
them using the Routine dialog box. The following program components are
classified as routines:
v Transform functions. These are functions that you can use when defining custom
transforms. IBM InfoSphere DataStage has a number of built-in transform
functions but you can also define your own transform functions in the Routine
dialog box.
v Before/After subroutines. When designing a job, you can specify a subroutine to
run before or after the job, or before or after an active stage. InfoSphere
DataStage has a number of built-in before/after subroutines but you can also
define your own before/after subroutines using the Routine dialog box.
v Custom UniVerse functions. These are specialized BASIC functions that have
been defined outside InfoSphere DataStage. Using the Routine dialog box, you
can get InfoSphere DataStage to create a wrapper that enables you to call these
functions from within InfoSphere DataStage. These functions are stored under
the Routines branch in the Repository. You specify the category when you create
the routine. If NLS is enabled, you should be aware of any mapping
requirements when using custom UniVerse functions. If a function uses data in a
particular character set, it is your responsibility to map the data to and from
Unicode.
v ActiveX (OLE) functions. You can use ActiveX (OLE) functions as programming
components within InfoSphere DataStage. Such functions are made accessible to
InfoSphere DataStage by importing them. This creates a wrapper that enables
you to call the functions. After import, you can view and edit the BASIC
wrapper using the Routine dialog box. By default, such functions are located in
the Routines Class name branch in the Repository, but you can specify your
own category when importing the functions.
v Web Service routines. You can use operations imported from a web service as
programming components within InfoSphere DataStage. Such routines are
created by importing from a web service WSDL file.
When using the Expression Editor in the server job, all of these components appear
under the DS Routines... command on the Suggest Operand menu.
Note: This page is not available if you selected Custom UniVerse Function on
the General page.
6. When you are happy with your code, you should save, compile and test it (see
"Saving Code", "Compiling Code", and "Testing a Routine").
7. Select the Dependencies page to define the dependencies of your routine.
The Dependencies page allows you to enter any locally or globally cataloged
functions or routines that are used in the routine you are defining. This is to
ensure that, when you package any jobs using this routine for deployment on
another system, all the dependencies will be included in the package. The
information required is as follows:
v Type. The type of item upon which the routine depends. Choose from the
following:
Local Locally cataloged BASIC functions and subroutines.
Global Globally cataloged BASIC functions and subroutines.
File A standard file.
ActiveX An ActiveX (OLE) object (not available on UNIX-based systems).
Web service A web service.
v Name. The name of the function or routine. The name required varies
according to the type of dependency:
Local The catalog name.
Global The catalog name.
File The file name.
ActiveX The Name entry is actually irrelevant for ActiveX objects. Enter
something meaningful to you (ActiveX objects are identified by the Location
field).
v Location. The location of the dependency. A browse dialog box is available to
help with this. This location can be an absolute path, but it is recommended
you specify a relative path using the following environment variables:
%SERVERENGINE% - Server engine account directory (normally
C:\IBM\InformationServer\Server\DSEngine on Windows and
/opt/IBM/InformationServer/Server/DSEngine on UNIX).
%PROJECT% - Current project directory.
%SYSTEM% - System directory on Windows or /usr/lib on UNIX.
Entering code:
You can enter or edit code for a routine on the Code page in the Server Routine
dialog box.
The first field on this page displays the routine name and the argument names. If
you want to change these properties, you must edit the fields on the General and
Arguments pages.
The main part of this page contains a multiline text entry box, in which you must
enter your code. To enter code, click in the box and start typing. You can use the
following standard Windows edit functions in this text box:
v Delete using the Del key
Some of these edit functions are included in a shortcut menu which you can
display by right clicking. You can also cut, copy, and paste code using the buttons
in the toolbar.
Your code must only contain BASIC functions and statements supported by IBM
InfoSphere DataStage.
If you want to format your code, click the Format button on the toolbar.
The return field on this page displays the return statement for the function or
subroutine. You cannot edit this field.
Saving code:
When you have finished entering or editing your code, the routine must be saved.
A routine cannot be compiled or tested if it has not been saved. To save a routine,
click Save in the Server Routine dialog box. The routine properties (its name,
description, number of arguments, and creator information) and the associated
code are saved in the Repository.
Compiling code:
When you have saved your routine, you must compile it.
To compile a routine, click Compile... in the Server Routine dialog box. The status
of the compilation is displayed in the lower window of the Server Routine dialog
box. If the compilation is successful, the routine is marked as "built" in the
Repository and is available for use. If the routine is a Transform Function, it is
displayed in the list of available functions when you edit a transform. If the
routine is a Before/After Subroutine, it is displayed in the drop-down list box of
available subroutines when you edit an Aggregator, Transformer, or plug-in stage,
or define job properties.
If NLS is enabled, watch for multiple question marks in the Compilation Output
window. This generally indicates that a character set mapping error has occurred.
When you have modified your code, click Save then Compile... . If necessary,
continue to troubleshoot any errors, until the routine compiles successfully.
Once the routine is compiled, you can use it in other areas of InfoSphere DataStage
or test it.
Testing a routine:
Before using a compiled routine, you can test it using the Test... button in the
Server Routine dialog box.
The Test... button is activated when the routine has been successfully compiled.
Note: The Test... button is not available for a Before/After Subroutine. Routines of
this type cannot be tested in isolation and must be executed as part of a running
job.
When you click Test..., the Test Routine dialog box appears:
This dialog box contains a grid and buttons. The grid has a column for each
argument and one for the test result.
You can add and edit rows in the grid to specify the values for different test cases.
To run a test with a chosen set of values, click anywhere in the row you want to
use and click Run. If you want to run tests using all the test values, click Run All.
The Result... column is populated as each test is completed.
To see more details for a particular test, double-click the Result... cell for the test
you are interested in. The Test Output window appears, displaying the full test
results:
If you want to delete a set of test values, click anywhere in the row you want to
remove and press the Delete key or choose Delete row from the shortcut menu.
When you have finished testing the routine, click Close to close the Test Routine
dialog box. Any test values you entered are saved when you close the dialog box.
As well as editing routines that you have created yourself, you can also edit
routines that were created by IBM InfoSphere DataStage when you imported
ActiveX functions or Web services routines. You can edit the BASIC wrapper code
that was created to run these routines as part of a job (to edit the routines
themselves, you would need to edit them outside of InfoSphere DataStage and
re-import them).
Copying a routine
You can copy an existing routine using the Designer.
Procedure
1. Select it in the repository tree
2. Choose Create copy from the shortcut menu.
Results
The routine is copied and a new routine is created in the same folder in the project
tree. By default, the copy is named CopyOfXXX, where XXX is the
name of the chosen routine. An edit box appears allowing you to rename the copy
immediately. The new routine must be compiled before it can be used.
Custom transforms
You can create, view or edit custom transforms for server jobs using the Transform
dialog box.
Transforms specify the type of data transformed, the type it is transformed into,
and the expression that performs the transformation.
The IBM InfoSphere DataStage Expression Editor helps you to enter correct
expressions when you define custom transforms in the InfoSphere DataStage
Designer. The Expression Editor can:
v Facilitate the entry of expression elements
v Complete the names of frequently used variables
v Validate variable names and the complete expression
When you are entering expressions, the Expression Editor offers choices of
operands and operators from context-sensitive shortcut menus.
When using the Expression Editor, the transforms appear under the DS
Transform... command on the Suggest Operand menu.
Transforms are used in the Transformer stage to convert your data to a format you
want to use in the final data mart. Each transform specifies the BASIC function
used to convert the data from one type to another. There are a number of built-in
transforms supplied with InfoSphere DataStage.
To provide even greater flexibility, you can also define your own custom routines
and functions from which to build custom transforms. There are three ways of
doing this:
v Entering the code within InfoSphere DataStage (using BASIC functions).
v Creating a reference to an externally cataloged routine.
v Importing external ActiveX (OLE) functions or web services routines.
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Transform icon.
c. Click OK. The Transform dialog box appears, with the General page on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Other > Transform from the shortcut menu. The Transform
dialog box appears, with the General page on top. This dialog box has two
pages:
f. General. Displayed by default. Contains general information about the
transform.
g. Details. Allows you to specify source and target data elements, the function,
and arguments to use.
2. Enter the name of the transform in the Transform name field. The name
entered here must be unique, as no two transforms can have the same name.
Also note that the transform should not have the same name as an existing
BASIC function; if it does, the function will be called instead of the transform
when you run the job.
3. Optionally enter a brief description of the transform in the Short description
field.
4. Optionally enter a detailed description of the transform in the Long description
field. Once this page is complete, you can specify how the data is converted.
5. Click the Details tab. The Details page appears at the front of the Transform
dialog box.
6. Optionally choose the data element you want as the target data element from
the Target data element list box. (Using a target and a source data element
allows you to apply a stricter data typing to your transform. See "Data
Elements" for a description of data elements.)
7. Specify the source arguments for the transform in the Source Arguments grid.
Enter the name of the argument and optionally choose the corresponding data
element from the drop-down list.
Results
You can then use the new transform from within the Transformer Editor.
Note: If NLS is enabled, avoid using the built-in Iconv and Oconv functions to
map data unless you fully understand the consequences of your actions.
The Transform dialog box appears. You can edit any of the fields and options on
either of the pages.
Procedure
1. Select it in the repository tree
2. Choose Create copy from the shortcut menu.
Results
The transform is copied and a new transform is created in the same folder in the
project tree. By default, the copy is named CopyOfXXX, where XXX is
the name of the chosen transform. An edit box appears allowing you to rename the
copy immediately.
Data elements
Each column within a table definition can have a data element assigned to it. A
data element specifies the type of data a column contains, which in turn
determines the transforms that can be applied in a Transformer stage.
The use of data elements is optional. You do not have to assign a data element to a
column, but it enables you to apply stricter data typing in the design of server
jobs. The extra effort of defining and applying data elements can pay dividends in
effort saved later on when you are debugging your design.
You can choose to use any of the data elements supplied with IBM InfoSphere
DataStage, or you can create and use data elements specific to your application.
For a list of the built-in data elements, see "Built-In Data Elements".
For example, if you have a column containing a numeric product code, you might
assign it the built-in data element Number. There is a range of built-in transforms
associated with this data element. However, all of these would be unsuitable, as it
is unlikely that you would want to perform a calculation on a product code. In this
case, you could create a new data element called PCode.
Each data element has its own specific set of transforms which relate it to other
data elements. When the data elements associated with the columns of a target
table are not the same as the data elements of the source data, you must ensure
that you have the transforms needed to convert the data as required. For each
target column, you should have either a source column with the same data
element, or a source column that you can convert to the required data element.
For example, suppose that the target table requires a product code using the data
element PCode, but the source table holds product data using an older product
numbering scheme. In this case, you could create a separate data element for
old-format product codes called Old_PCode, and you then create a custom
transform to link the two data elements; that is, its source data element is
Old_PCode, while its target data element is PCode. This transform, which you
could call Convert_PCode, would convert an old product code to a new product
code.
A data element can also be used to "stamp" a column with SQL properties when
you manually create a table definition or define a column definition for a link in a
job.
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Data Element icon.
c. Click OK. The Data Element dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Other > Data Element from the shortcut menu. The Data
Element dialog box appears, with the General page on top.
This dialog box has four pages:
f. General. Displayed by default. Contains general information about the data
element.
g. SQL Properties. Contains fields that describe the properties of the
associated SQL data type. This page is used when this data element is used
to manually create a new column definition for use with an SQL data
source. If you import the column definition from an SQL data source, the
SQL properties are already defined.
Results
You must edit your table definition to assign this new data element.
Data element category names can be any length and consist of any characters,
including spaces.
Data elements are assigned by editing the column definitions which are then used
in your InfoSphere DataStage job, or you can assign them in individual stages as
you design your job.
Procedure
1. Select the table definition you want in the repository tree, and do one of the
following:
v Choose Properties... from the shortcut menu.
v Double-click the table definition in the tree.
The Table Definition dialog box appears.
2. Click the Columns tab. The Columns page appears at the front of the Table
Definition dialog box.
3. Click the Data element cell for the column definition you want to edit.
4. Choose the data element you want to use from the drop-down list. This list
contains all the built-in data elements supplied with InfoSphere DataStage and
any data elements you created. For a description of the built-in data elements
supplied with InfoSphere DataStage, see "Built-In Data Elements".
5. Click OK to save the column definition and to close the Table Definition dialog
box.
To view the properties of a data element, select it in the repository tree and do one
of the following:
v Choose Properties... from the shortcut menu.
v Double-click on it.
The Data Element dialog box appears. Click OK to close the dialog box.
If you are viewing the properties of a data element that you created, you can edit
any of the fields on the General or SQL Properties page. The changes are saved
when you click OK.
If you are viewing the properties of a built-in data element, you cannot edit any of
the settings on the General or SQL Properties page.
Procedure
1. Select it in the repository tree
2. Choose Create copy from the shortcut menu.
Results
The data element is copied and a new data element is created in the same folder in
the project tree. By default, the copy is named CopyOfXXX, where
XXX is the name of the chosen data element. An edit box appears allowing you to
rename the copy immediately.
There are six data elements that represent each of the base types used internally by
InfoSphere DataStage:
v Date. The column contains a date, represented in InfoSphere DataStage internal
format. There are many built-in transforms available to convert dates to
character strings.
v Number. The column contains a numeric value.
v String. The column contains data as a string of characters. InfoSphere DataStage
interprets the string as a number if needed.
v Time. The column contains data as a time.
v Default. The data has an SQL data type already assigned and the most
appropriate base type is used.
v Timestamp. The column contains a string that represents a combined date/time:
YYYY-MM-DD HH:MM:SS
In addition, there are some data elements that are used to express dates in
alternative ways:
v DATE.TAG. The data specifies a date and is stored in the following format:
1993-02-14 (February 14, 1993)
v WEEK.TAG. The data specifies a week and is stored in the following format:
1993W06 (week 6 of 1993)
v MONTH.TAG. The data specifies a month and is stored in the following format:
1993-02 (February 1993)
v QUARTER.TAG. The data specifies a quarter and is stored in the following
format:
1993Q1 (quarter 1, 1993)
v YEAR.TAG. The data specifies a year and is stored in the following format:
1993
Each of these data elements has a base type of String. The format of the date
complies with various ISO 8601 date formats.
You can view the properties of these data elements. You cannot edit them.
Mainframe routines
You can define a mainframe routine to help you design mainframe jobs to transform
data.
The External Routine stage in a mainframe job enables you to call a COBOL
subroutine that exists in a library external to InfoSphere DataStage in your job. You
must first define the routine, details of the library, and its input and output
arguments. The routine definition is stored in the metadata repository and can be
referenced from any number of External Routine stages in any number of
mainframe jobs.
The External Source stage in a mainframe job allows you to read data from file
types that are not supported in InfoSphere DataStage MVS™ Edition. After you
write an external source program, you create an external source routine in the
metadata repository. The external source routine specifies the attributes of the
external source program.
The External Target stage in a mainframe job allows you to write data to file types
that are not supported in InfoSphere DataStage MVS Edition. After you write an
external target program, you create an external target routine in the metadata
repository. The external target routine specifies the attributes of the external target
program.
When you create, view, or edit a mainframe routine, the Mainframe Routine dialog
box appears. This dialog box has up to four pages: General, Creator, and
Arguments, plus a JCL page if you are editing an External Source or External
Target routine.
Naming routines:
Routine names can be one to eight characters in length. They must begin with an
alphabetic character.
Creating a routine
You can create routines in the Designer client.
To view or modify a routine, select it in the repository tree and do one of the
following:
v Choose Properties... from the shortcut menu.
v Double-click on it.
The Routine dialog box appears. You can edit any of the fields and options on any
of the pages.
Copying a routine
You can copy an existing routine using the Designer.
Procedure
1. Select it in the repository tree
2. Choose Create copy from the shortcut menu.
Results
The routine is copied and a new routine is created in the same folder in the project
tree. By default, the copy is named CopyOfXXX, where XXX is the
name of the chosen routine. An edit box appears allowing you to rename the copy
immediately.
Renaming a routine
You can rename any user-written routines using the Designer.
To rename an item, select it in the repository tree and do one of the following:
v Click the routine again. An edit box appears and you can enter a different name
or edit the existing one. Save the new name by pressing Enter or by clicking
outside the edit box.
v Choose Rename from the shortcut menu. An edit box appears and you can enter
a different name or edit the existing one. Save the new name by pressing Enter
or by clicking outside the edit box.
v Double-click the routine. The Mainframe Routine dialog box appears and you
can edit the Routine name field. Click Save, then Close.
Machine profiles
Machine profiles are also used by the mainframe FTP stage. They provide a reusable
way of defining the mainframe machine to which IBM InfoSphere DataStage uploads
code or transfers files by FTP.
You can create mainframe machine profiles and store them in the InfoSphere
DataStage repository. You can create, copy, rename, move, and delete them in the
same way as other repository objects.
Procedure
1. Do one of:
a. Choose File > New from the Designer menu. The New dialog box appears.
b. Open the Other folder and select the Machine Profile icon.
c. Click OK. The Machine Profile dialog box appears, with the General page
on top.
Or:
d. Select a folder in the repository tree.
e. Choose New > Other > Machine Profile from the shortcut menu. The
Machine Profile dialog box appears, with the General page on top.
2. Supply general details as follows:
a. Enter the name of the machine profile in the Machine profile name field.
The name entered here must be unique, as no two machine profiles can have
the same name.
b. Choose the type of platform for which you are defining a profile from the
Platform type drop-down list.
c. Optionally enter a brief description of the profile in the Short description
field.
d. Optionally enter a detailed description of the data in the Long description
field. This description is displayed only when you view the properties of a
machine profile.
3. Click the Connection tab to go to the Connection page.
Fill in the fields as follows:
v Specify the IP Host name/address for the machine.
v Specify the Port to connect to. The default port number is 21.
v Choose an Ftp transfer type of ASCII or Binary.
v Specify a user name and password for connecting to the machine. The
password is stored in encrypted form.
v Click Active or Passive as appropriate for the FTP service.
v If you are generating process metadata from mainframe jobs, specify the
target directory and dataset name for the XML file which will record the
operational metadata.
4. Click the Libraries tab to go to the Libraries page.
Fill in the fields as follows:
v In Source library specify the destination for the generated code.
v In Compile JCL library specify the destination for the compile JCL file.
v In Run JCL library specify the destination for the run JCL file.
These facilities are available if you have InfoSphere DataStage MVS Edition
installed along with the IMS Source package.
A DBD file defines the physical structure of an IMS database. A PSB file defines an
application's view of an IMS database.
To open an IMS database or viewset for editing, select it in the repository tree and
do one of the following:
v Choose Properties from the shortcut menu.
v Double-click the item in the tree.
Depending on the type of IMS item you selected, either the IMS Database dialog
box appears or the IMS Viewset dialog box appears. Remember that, if you edit the
definitions, your changes do not affect the actual database or viewset that they describe.
The IMS Database editor allows you to view, edit, or create IMS database objects.
The IMS Viewset editor allows you to view, edit, or create IMS viewset objects.
This dialog box is divided into two panes. The left pane contains a tree structure
displaying the IMS viewset (PSB), its views (PCBs), and the sensitive segments.
The right pane displays the properties of selected items. It has up to three pages
depending on the type of item selected:
v Viewset. Properties are displayed on one page and include the PSB name. This
field is read-only. You can optionally enter short and long descriptions.
v View. There are two pages for view properties:
– General. Displays the PCB name, DBD name, type, and an optional
description. If you did not create associated tables during import or you want
to change which tables are associated with PCB segments, click the
Segment/Table Mapping... button. The Segment/Associated Table Mapping
dialog box appears.
To create a table association for a segment, select a table in the left pane and
drag it to the segment in the right pane. The left pane displays available
tables in the Repository which are of type QSAM_SEQ_COMPLEX. The right
pane displays the segment names and the tables currently associated with
them; you can right-click to clear one or all of the current table mappings.
Click OK when you are done with the mappings, or click Cancel to discard
any changes you have made and revert to the original table associations.
– Hierarchy. Displays the PCB segment hierarchy in a read-only diagram. You
can right-click to view the hierarchy in detailed mode.
v Sensitive Segment. There are three pages for sensitive segment properties:
– General. Displays the segment name and its associated table. If you want to
change the associated table, click the browse button next to the Associate
table field to select another table.
– Sen Fields. Displays the sensitive fields associated with the sensitive segment.
These fields are read-only.
You configure the job to specify all the properties for the individual stages you
have sketched. You also specify the data that flows down the links that connect the
stages.
Each job type supports many different types of stage, each with many properties.
Details of these stages and their properties can be found in the following places:
v Developing parallel DataStage and QualityStage jobs
v Developing server jobs
v Developing mainframe jobs
There are also job properties that you can set. These control how the whole job
behaves when it runs.
When you design a job, you specify the column definitions for the data being read
and processed. When runtime column propagation is enabled, the job will pass all
the columns that it encounters in the data along the data flow, regardless of
whether these columns have been defined in your data flow. You can design jobs
that just specify a subset of the columns, but all the columns in the data will be
passed through the stages. If runtime column propagation is not enabled, you have
to specify all the columns in the data; otherwise, the extra columns are dropped
when the job runs.
Runtime column propagation must be enabled for the project in the Administrator
client. You can then turn runtime column propagation on or off for individual jobs
on the General page.
NLS page
This page only appears if you have NLS installed with IBM InfoSphere DataStage.
It allows you to ensure that InfoSphere DataStage uses the correct character set
map and collate formatting rules for your parallel job.
The character set map defines the character set InfoSphere DataStage uses for this
job. You can select a specific character set map from the list or accept the default
setting for the whole project.
The locale determines the order for sorted data in the job. Select the project default
or choose one from the list.
The Defaults page shows the current defaults for date, time, timestamp, and
decimal separator. To change the default, clear the corresponding Project default
check box, then either select a new format from the drop-down list or type in a
new format.
Use the Message Handler for Parallel Jobs field to select a message handler to be
included in the job. The message handler is compiled with the job and becomes part
of its executable. The drop-down list offers a choice of currently defined message
handlers.
The NLS page only appears if you have NLS enabled on your system.
The list contains all character set maps that are loaded and ready for use. You can
view other maps that are supplied by clicking Show all maps, but these maps
cannot be used unless they are loaded using the Administrator client.
In most cases you should use the same locale for every category to ensure that the
data is formatted consistently.
You can improve the performance of the job by specifying the way the system
divides jobs into processes.
You can improve performance where you are running multiple instances of a job
by enabling hashed file cache sharing.
These settings can also be made on a project-wide basis using the Administrator
client.
Note: You cannot use row-buffering of either sort if your job uses COMMON
blocks in transform functions to pass data between stages. This is not
recommended practice, and it is advisable to redesign your job to use row
buffering rather than COMMON blocks.
v Buffer size. Specifies the size of the buffer used by in-process or inter-process
row buffering. Defaults to 128 Kb.
v Timeout. This option only applies when inter-process row buffering is used. Use
the option to specify the time one process will wait to communicate with
another via the buffer before timing out. Defaults to 10 seconds.
v Enable hashed file cache sharing. This option is set on the General page of the
Job Properties window. Select this option to enable multiple processes to access
the same hash file in cache (the system checks if this is appropriate). This can
improve performance when you run multiple instances of a job.
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
The Job Properties dialog box for mainframe jobs has the following pages:
v General Page. Allows you to specify before and after job routines for the job,
enable or disable various job-wide features, and enter short and long
descriptions of the job.
v Parameters Page. Allows you to specify job parameters. Job parameters allow
you to specify certain values used by a job at run time.
v Environment Page. Allows you to specify information that is used when code is
generated for mainframe jobs.
v Extension Page. If you have customized the JCL templates and added extension
variables, allows you to supply values for these variables.
v Operational Metadata Page. If your job is going to generate operational
metadata, you can specify the details of how the metadata will be handled on
this page.
Click OK to record your changes in the job design. Changes are not saved to the
Repository until you save the job design.
Instead of entering inherently variable factors as part of the job design you can set
up parameters which represent processing variables.
For mainframe jobs the parameter values are placed in a file that is accessed when
the job is compiled and run on the mainframe.
Job parameters are defined, edited, and deleted in the Parameters page of the Job
Properties dialog box.
All job parameters are defined by editing the empty row in the Job Parameters
grid.
Note: Before you remove a job parameter definition, you must make sure that you
remove the references to this parameter in your job design. If you do not do this,
your job might fail to run.
The mainframe job Parameters page has these fields and columns:
v Parameter file name. The name of the file containing the parameters.
The Expression Editor offers a list of the job parameters that have been defined.
The actual values for job parameters are specified in a separate file which is
uploaded to the mainframe with the job.
Comparing objects
You can compare two objects of the same type. For example, you can compare a
parallel job with another parallel job. You can compare the following objects:
v Parallel jobs
v Server jobs
v Mainframe jobs
v Job sequences
v Parallel shared containers
v Server shared containers
v Parallel routines
v Server routines
v Mainframe routines
v Table definitions
v Data connections
You can compare two objects that are in the same project or compare two objects
that are in two different projects. For example, you can compare the table
definition named CA_address_data in the project named MK_team with the table
named Cal_addresses in the project named WB_team.
The Designer client displays descriptions of all the differences between the two
objects in a hierarchical tree structure. You can expand branches in the tree to view
the details. Click the underlined link in the description to view the changed object.
The details that are displayed depend on the types of object that you are
comparing. For example, the following picture shows the result of comparing two
table definitions.
If you compare objects that contain differences in multi-line text (for example,
source code in routines), the change tree displays a View button. Click View to
view the source code. By default the source is displayed in Notepad, but you can
choose a different application to display it.
The Designer client compares the two objects and displays the results in a window.
These results show what the differences are between the two objects.
Procedure
1. Select the first object that you want to compare in the repository tree, and then
do one of the following tasks:
v Hold down the CTRL key and click the second object in the tree, then
right-click and select Compare selected.
v Right-click and select Compare against, and then browse for the object, select
it and click OK.
2. View the results in the window.
The Designer client compares two objects of the same type that are in different
projects and displays the results in a window. These results show what the
differences are between the two objects.
Procedure
1. In the repository tree, select the first object that you want to compare.
2. Right-click and select Cross Project Compare.
3. Click Attach.
4. In the “Attach to Project” window, type your user name and password, and
select the project that you want to attach to from the Project list.
5. Click OK to see a filtered view of the repository tree in the chosen project. The
filtered view displays only folders that contain eligible objects.
6. Browse the tree and select the object that you want to compare to the first
object.
7. Click OK.
The name of a table definition object must be specified as its data locator
name, not the name that is displayed in the repository tree. The data locator
name is displayed in the table definition properties window.
If you specify only the domain and host name details, you are prompted for
the user name and password. (The password is hidden as you type in the
command window.) If you specify the domain, host name, and user name
details, you are prompted for the password.
right_side_connection_details
The connection details for the right side of the comparison. These details relate
to the second object that you want to compare. You must specify full
connection details only if you compare two objects in different projects.
Otherwise, you can specify only the object name. The syntax for the connection
details is the same as for the left_side_connection_details parameter.
difftype
The type of objects to compare. This parameter accepts any of the following
values:
After you run the diffapicmdline command, one of the following codes is
returned:
v 0 indicates a successful comparison
v 1 indicates an error in the command line
v 2 indicates that the client cannot compare the objects
The following command compares the exercise1 job with the new_exercise job.
Both jobs are located in the tutorial project. The output is sent to the
C:\compare_output.html file.
Find facilities
Use the find facilities to search for objects in the repository and to find where
objects are used by other objects.
The repository tree has extensive search capabilities. Quick find is available
wherever you can see the repository tree: in browse windows as well as in the
Designer client window. Advanced find is available from the Designer client window.
Quick find
Use the quick find feature to search for a text string in the name of an object, or in
its name and its description.
You can restrict your search to certain types of object, for example you can search
for certain jobs. You can keep the quick find window open at the top of your
repository tree in the Designer client window and you can access it from any
windows in which the repository tree is displayed.
Quick find supports the use of wildcards. Use an asterisk to represent zero or more
characters.
Procedure
1. Open the quick find window by clicking Open quick find in the top right
corner of the repository tree.
2. Enter the name to search for in the Name to find field. If you are repeating an
earlier search, click the down arrow in the name box and select your previous
search from the drop-down list.
3. Select the Include descriptions check box if you want the search to take in the
object descriptions as well as the names (this will include long and short
description if the object type has both).
4. If you want to restrict the search to certain types of object, select the object
types from the drop-down list in Types to find.
5. After specifying the search details, click Find.
Example
Here is an example of a quick find where you searched for any objects called copy,
and IBM InfoSphere DataStage found one job that meets this criterion. The job it
found is highlighted in the tree. InfoSphere DataStage reports that there are two
matches because it also found the copy stage object. If you wanted to search only
jobs then you could select jobs in the Types to find list. If you want to find any jobs
with copy in the title you could enter *copy* in the Name to find field and the
search would find them.
If quick find locates several objects that match your search criteria, it highlights
each one in the tree, with the folders expanded as necessary. To view the next
object, click Next. You can click the n matches link to open the Advanced find
window and display all the found objects in the window.
Advanced find
You can use advanced find to carry out sophisticated searches.
Advanced find displays all the search results together in a window, independent of
the repository tree structure.
You can search on Name or Text in the object descriptions. The other items are
additive; if you specify more than one, the objects have to meet all the additional
criteria before they are deemed to have satisfied the search conditions.
Procedure
1. Type a name in the Name field.
2. Optionally specify one or more additional search criteria.
Results
The names and details of the objects in the repository that match the search criteria
are listed in the Results - Details tab. You can select items in the list and perform
various operations on them from the shortcut menu. The available operations
depend on the object type, but include the following:
v Find dependencies and Find where used.
v Export. Opens the Repository Export window which allows you to export the
selected object in a .dsx or .xml file.
v Multiple job compile. Available for jobs and job sequences. Opens the multiple
job compiler tool.
v Edit. Available for jobs and job sequences. The selected job opens in the job
design canvas.
v Add to palette. The object is added to the palette currently open so that it is
readily available for future operations.
v Create copy. Creates a copy of the selected object named CopyOfObjectName.
v Rename. Use this operation to rename the selected object.
v Delete. Use this operation to delete the selected object.
v Properties. Opens the Properties window for the selected object. All objects have
associated properties giving detailed information about the object.
You can save the results of an advanced search operation as an XML file or as a
report. You can view a report in the reporting console.
The following example shows a search for object names that start with the word
exercise and include descriptions that contain the word tutorial. The results
show that four objects match these criteria:
Procedure
1. Select a table definition in the repository tree.
2. Right-click and select Find shared table definitions from the pop-up menu.
Results
You can search for column definitions within the table definitions that are stored in
the IBM InfoSphere DataStage repository, or you can search for column definitions
in jobs and shared containers. You can narrow the search by specifying other
search criteria, such as folders to search in, when the column definition was
created or modified, or who created or modified the column definition.
Procedure
1. Type the name of the column that you want to find in the Name to find field.
You can use wildcard characters if you want to search for a number of columns
with similar names.
2. Select Columns in Table Definitions or Columns in Jobs or Shared
Containers in the Type list. Select both to search for the column in table
definitions and in jobs and shared containers.
3. Specify other search criteria as required.
4. Click Find to search for the column.
Impact analysis
Use the impact analysis features to discover where objects are used, and what
other objects they depend on.
The impact analysis features help you to assess the impact of changes you might
make to an object on other objects, or on job designs. For example, before you edit
a table definition, you could find the jobs that derive their column definitions from
the table definition and consider whether those jobs will need changing too.
You can run a where used query directly from the repository tree.
Procedure
1. Select the object in the repository tree.
2. Either right-click to open the pop-up menu or open the Repository menu from
the main menu bar.
3. Select Find where used > All types or Find where used (deep) > All types to
search for any type of object that uses your selected object.
4. Select Find where used > object type or Find where used (deep) > object type
to restrict the search to certain types of object.
Results
The search displays the results in the Repository Advanced Find window. The
results of a deep search show all the objects related to the ones that use your
search object.
Running a where used query from the Repository Advanced Find window:
About this task
You can run a where used query from the Repository Advanced Find window.
Procedure
1. Click the where used item in the left pane to open the where used tool.
2. Click Add.
3. In the Select items window, browse for the object or objects that you want to
perform the where used query on and click OK.
4. You can continue adding objects to the where used window to create a list of
objects that you want to perform a where used query on.
5. Click Find when you are ready to perform the search. The results are displayed
in the Results - Details and the Results - Graphical tabs of the Repository
Advanced Find window.
6. View the results in the Repository Advanced Find window. Click the Results -
Details tab to view the results as a text list, or click the Results - Graphical tab
to view a graphical representation of the results.
Results
If you view the results as a text list, note that the results only list the first three
dependency paths found for each object in the Sample dependency path field. To
view all the dependency paths for an object, right-click on the object and select
Show dependency path to object.
Results
The query runs and displays the jobs or shared containers that use the selected
column or columns.
Running where used queries from the results of an advanced find for a
column:
About this task
You run a query from the results of an advanced find for a column.
Procedure
1. Select one or more columns in the results pane of the Repository Advanced
Find window.
2. Right-click and select Where used from the pop-up menu.
Procedure
1. In the repository tree, select the table definition that contains the column to
query.
2. Right-click and select Find where column used from the pop-up menu.
3. In the table column window, select one or more columns.
4. Click OK.
Running where used queries from the Repository Advanced Find window:
About this task
Procedure
1. Open the Repository Advanced Find window.
2. Click the where used item in the left pane.
3. Click Add.
4. In the Select Items window, select the column or columns that you want to
query.
5. Click OK.
6. Click Find.
You can display the data lineage of any column that you have previously searched
for, or you can open a job or shared container and select a column for which you
want to view the data lineage. In the job or shared container, stages and links that
use the column are highlighted. Text is added to the link which explains how the
column is used. Data lineage cannot be displayed for jobs that use runtime column
propagation.
You can open a job or shared container and display the data lineage for a column
that you previously searched for.
Procedure
1. Right-click a job or a shared container in the Repository Advanced Find
window.
2. Select Show Data Flow (design time) from the pop-up menu.
You can enable data lineage highlighting from the Designer client menu.
Procedure
1. In the Designer client, open the job or shared container in which you want to
highlight the data lineage of a column.
2. Select Diagram > Configure data flow view.
3. In the Data Flow Analysis window, select one of the Type of analysis options
to specify whether you want to highlight where data in the selected column
originates, or highlight where the data flows to, or both.
4. Click Add to open a browser window that shows all the columns used in the
job. Select the column or columns that you want to display data lineage for.
Alternatively, click Select All to select all the columns.
You can enable data lineage highlighting from the stage pop-up menu.
Procedure
1. In the Designer client, open the job or shared container in which you want to
highlight the data lineage of a column.
2. Select the stage that you want to see the data lineage for.
3. Select one of the following items from the pop-up menu:
v Show where data flows to
v Show where data originates from
v Show where data flows to and originates from
4. In the browser window, select the column or columns that you want to display
data lineage for. Alternatively, click Select All to select all the columns.
Procedure
1. In the Designer client, ensure that the job or shared container for which you
want to turn off the data lineage highlighting is currently on top of the canvas
and selected.
2. Select Diagram > Show Data Flow (design time) to turn off the highlighting.
IBM InfoSphere DataStage performs the search and displays the results in the
Repository Advanced Find window. The results of a deep search show all the
objects related to the ones that depend on your search object.
You can run a dependencies of query directly from the repository tree.
Procedure
1. Select the object in the repository tree.
2. Either right-click to open the pop-up menu or open the Repository menu from
the main menu bar.
3. Select Find dependencies > All types or Find dependencies (deep) > All types
to search for any type of object that your selected object depends on.
4. Select Find dependencies > object type or Find dependencies (deep) > object
type to restrict the search to certain types of object (the drop-down menu lists
all the types of object that the selected type of object can use).
You can run a dependencies of query from the Repository Advanced Find window.
Procedure
1. Click the dependencies of item in the left pane to open the dependencies of tool.
2. Click Add. A Select Items window opens.
3. In the Select Items window, browse for the object or objects that you want to
perform the dependencies of query on and click OK. The selected object or
objects appear in the dependencies of window.
Results
If you view the results as a text list, note that the results only list the first three
dependency paths found for each object in the Sample dependency path field. To
view all the dependency paths for an object, right-click on the object and select
Show dependency path to object.
You can view the results in list form or as a graphical display. Use the tools in the
window tool bar to:
v Save the results to an XML file
v Save the results as an HTML report
v Zoom in and out of the graphical display
In this example, the data element called TestDataElement is selected, and then a
where used query is run. The results show that there are three table definitions
that use this data element. The Results - Details tab lists the table definition
objects:
You can select any of the objects from the list and perform operations on them
from the pop-up menu as described for advanced find.
Impact analysis queries also display results in graphical form, showing the
relationships of the objects found by the query. To view the results in graphical
form, click the Results - Graphics tab:
From this view you can invoke a view of the dependency path to a particular
object. Select that object in the graphical results viewer tab and choose Show
dependency path from ObjectName from the pop-up menu. This action opens
another tab, which shows the relationships between the objects in the job. Here is
the dependency path from the job to the table definition:
If an object has a plus sign (+) next to it, click the plus sign to expand the view:
The report is generated as an XML file. A style sheet is applied to the file to
convert it to HTML so that it can be displayed in an Internet browser or the IBM
Information Server reporting console.
Procedure
1. Select File > Generate report.
2. Provide the following details in the Enter report details window:
a. A name for the report.
b. A description of the report.
c. Choose whether to use the default style sheet or to supply a custom style
sheet.
3. Click OK.
4. The report is generated and located in the reporting console folder. You can
view it using the reporting console or an Internet browser. Choose Tools >
Reporting Console to open the Reporting Console.
5. You can also generate an XML file from the report by selecting File > Generate
report and specifying a file name and destination folder for the file.
IBM InfoSphere DataStage allows you to import and export components in order
to move jobs and other objects between InfoSphere DataStage development
systems. When you import or export jobs, InfoSphere DataStage ensures that all
the components required for a particular job are included in the import/export.
You can also use the export facility to generate XML documents which describe
objects in the repository. You can then use a browser such as Microsoft Internet
Explorer to view the document. XML is a markup language for documents
containing structured information. It can be used to publish such documents on
the Web. For more information about XML, visit the following Web sites:
v http://www.xml.com
v http://webdeveloper.com/xml
InfoSphere DataStage also allows you to import objects from outside InfoSphere
DataStage in order to leverage investment in other resources. Besides table
definitions, you can import ActiveX (OLE) functions to use in server jobs (or in
parallel jobs via a BASIC transformer or server shared container). You can similarly
import web services functions to use in routines. You can import metadata from a
variety of third party tools by using IBM InfoSphere Metadata Asset Manager.
Importing objects
The Designer allows you to import various objects into the repository:
v IBM InfoSphere DataStage components previously exported from other
InfoSphere DataStage projects (in proprietary format or in XML).
v External function definitions
v WebService function definitions
v Metadata by using the IBM InfoSphere Metadata Asset Manager
v Table definitions
v IMS definitions
You must copy the file from which you are importing to a directory you can access
from your local machine.
You can import components that support mainframe functionality only into an
InfoSphere DataStage system that has InfoSphere DataStage MVS Edition installed.
You should also ensure that the system to which you are importing supports the
required platform type.
There is a limit of 255 characters on object names. It is possible that exports from
earlier systems might exceed this limit.
When importing jobs or parameter sets with environment variable parameters, the
import adds that environment variable to the project definitions if it is not already
present. The value for the project definition of the environment variable is set to an
empty string, because the original default for the project is not known. If the
environment variable value is set to $PROJDEF in the imported component, the
import warns you that you need to set the environment variable value in the
project yourself.
Procedure
1. Choose Import > DataStage Components... to import components from a text
file or Import > DataStage Components (XML) ... to import components from
an XML file. The DataStage Repository Import window opens (the window is
slightly different if you are importing from an XML file, but has all the same
controls).
2. Type in the path or browse for the file to import from.
3. To import objects from the file into the repository, select Import all and click
OK. During import, you will be warned if objects of the same name already
exist in the repository and asked if you want to overwrite them. If you select
the Overwrite without query check box before importing you will not be
warned, and any existing objects will automatically be overwritten.
If you import job components, they are imported into the current project in the
Designer client.
4. To import selected components from the file into the repository, select Import
selected and click OK. The Import Selected dialog box appears. Select the
required items and click OK. The selected job components are imported into
the current project in the Designer.
5. To turn impact analysis off for this particular import, deselect the Perform
Impact Analysis checkbox. (By default, all imports are checked to see if they
are about to overwrite a currently used component; disabling this feature might
speed up large imports. You can disable it for all imports by changing the
impact analysis options.)
There are two ways of importing from the command line. The dsimport command
is a Windows application and requires user interaction with message boxes (to
import XML files in this way, use XML2DSX). The dscmdimport command is a
command line application and can be run unattended (this imports both DSX and
XML files).
You can also import objects from a .dsx file, by using the DSXImportService
command.
dsimport command
The dsimport command is a Microsoft Windows application that you use to import
InfoSphere DataStage components from a DSX file into the repository.
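Syntax
Based on the parameters described below and on the parallel dscmdimport command, a
typical invocation takes the following general form (this is a sketch; the exact
option set for dsimport might differ):
dsimport.exe /AF=authfile |
/URL=domainURL /H=hostname [/U=username [/P=password]] |
/D=domain /H=hostname [/U=username [/P=password]]
project | /ALL | /ASK
dsx_pathname1 dsx_pathname2 ...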
authfile
Name of the encrypted credentials file that contains the connection details.
domainURL
The full format URL for the domain to log on to. The URL includes the
protocol, host, and port information for the domain in this format:
https://domain:port. The port defaults to 9443 if it is not specified.
domain | domain:port_number
The host name of the services tier. This parameter can also have a port
number.
hostname
The host name for the InfoSphere Information Server engine where you are
importing a file to.
username
The user name to use for connecting to the services tier.
password
The password for the username that you are using to connect to the services
tier.
project | /ALL | /ASK
Specify a project to import the components to, or specify /ALL to import to all
projects, or specify /ASK to be prompted for the project to which to import.
dsx_pathname
The .dsx file to import from. You can specify multiple files if required.
If you do not use the /AF option and you want to run the command without any
user interaction, you must supply all of the connection details on the command
line. If all of the connection details are not supplied on the command line, the
Logon dialog is displayed. The Logon dialog is pre-filled with the values from the
command line. Any missing values are pre-filled with the values from your most
recent successful connection, except for the Password field which you must supply.
The password is not displayed as you type in the Logon dialog.
The following command imports the components in the jobs.dsx file into the
dstage1 project on the R101 server:
dsimport.exe /D=domain:9443 /U=wombat /P=w1ll1am dstage1 /H=R101
C:\temp\jobs.dsx
By default, if a job or parameter set uses environment variables that are not
defined in the target project, this command adds them for you during the import.
You can disable this behavior by using the NOENV flag. For an imported
environment variable parameter, the value that is set in the target job or parameter
set is taken from the export value, not the current project default value (if any).
When an environment variable parameter that you are exporting is set to the value
$PROJDEF, the definition is added to the project, but the value is empty. You must
specify a project value for that environment variable after the import by using the
Administrator client or by using the dsadmin command.
dscmdimport command
The dscmdimport command is a command-line application that you run to import
InfoSphere DataStage components from a DSX file or an XML file into the
repository. You can run this command unattended.
Syntax
The dscmdimport command includes the following syntax. To view help options for
the command, enter dscmdimport at the command prompt.
dscmdimport /AF=authfile |
/URL=domainURL /H=hostname [/U=username [/P=password]] |
/D=domain /H=hostname [/U=username [/P=password]]
/NUA /NOENV project | /ALL | /ASK
pathname1 pathname2 ...
/V
authfile
Name of the encrypted credentials file that contains the connection details.
domain | domain:port_number
The host name of the services tier. This parameter can also have a port
number.
domainURL
The full format URL for the domain to log on to. The URL includes the
protocol, host, and port information for the domain in this format:
https://domain:port. The port defaults to 9443 if it is not specified.
hostname
The host name for the InfoSphere Information Server engine where you are
importing a file to.
username
The user name to use for connecting to the services tier.
password
The password for the username that you are using to connect to the services
tier.
NUA
Include this flag to disable usage analysis. If you are importing a large project,
include this flag to speed up the import.
NOENV
Include this flag to prevent the import from adding any environment variables
to the project definitions. Use this option if you want to add missing job
environment variable definitions to the project manually. If you omit this
option, by default, the import adds any missing environment variable
definitions to the project for you.
project | /ALL | /ASK
Specify a project to import the components to, or specify /ALL to import to all
projects, or specify /ASK to be prompted for the project to which to import.
pathname
The file to import from. You can specify multiple files if required. The files can
be .dsx files or .xml files, or a combination of both.
/V Use this flag to switch the verbose option on.
The following command imports the components from the file jobs.dsx into the
project dstage1 on the R101 server:
dscmdimport /D=domain:9443 /U=wombat /P=w1ll1am dstage1 /H=R101
C:\temp\jobs.dsx
Messages from the import are sent to the console by default, but can be redirected
to a file by using the greater than symbol (>). In the following example, the
messages from the import are sent to the C:\temp\importlog file.
dscmdimport /D=domain:9443 /U=wombat /P=w1ll1am /H=R101
/NUA dstage99 C:\temp\project99.dsx
/V > C:\temp\importlog
By default, if a job or parameter set uses environment variables that are not
defined in the target project, this command adds them for you during the import.
You can disable this behavior by using the NOENV flag. For an imported
environment variable parameter, the value that is set in the target job or parameter
set is taken from the export value, not the current project default value (if any).
When an environment variable parameter that you are exporting is set to the value
$PROJDEF, the definition is added to the project, but the value is empty. You must
specify a project value for that environment variable after the import by using the
Administrator client or by using the dsadmin command.
XML2DSX command
The XML2DSX command is a Microsoft Windows application that you use to import
InfoSphere DataStage components from an XML file into the repository.
Syntax
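Based on the /AF option described below and on the example at the end of this
section, a typical invocation takes the following general form (a sketch; other
options might also be available):
XML2DSX.exe /AF=authfile |
/D=domain /H=hostname [/U=username [/P=password]]
[/N] project xml_pathname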
If you do not use the /AF option and you want to run the command without any
user interaction, you must supply all of the connection details on the command
line. If all of the connection details are not supplied on the command line, the
Logon dialog is displayed. The Logon dialog is pre-filled with the values from the
command line. Any missing values are pre-filled with the values from your most
recent successful connection, except for the Password field which you must supply.
The password is not displayed as you type in the Logon dialog.
The following command imports the components in the jobs.xml file into the
dstage1 project on the R101 server:
XML2DSX.exe /D=domain:9443 /U=wombat /P=w1ll1am /N dstage1
/H=R101 C:\temp\jobs.xml
Procedure
1. Choose Import > External Function Definitions... . The Import Transform
Functions Definitions wizard appears and prompts you to supply the pathname
of the file containing the transforms to be imported. This is normally a DLL file
which must have already been installed on the server machine.
2. Enter or browse for the pathname, then click Next. The wizard queries the
specified DLL file to establish what automation classes it contains and presents
these in a drop-down list.
Results
You can construct IBM InfoSphere DataStage routines from operations defined in
WSDL files. You can then use these routines in derivation expressions in server
jobs. For example, you could use one in a Transformer stage to determine how a
column value within a row is computed from the column values of the input rows.
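For example, if you imported an operation as a routine named GetExchangeRate (a
hypothetical name, used here only for illustration), a Transformer stage output
column derivation in a server job could call it directly, passing input link
columns as arguments:
GetExchangeRate(InLink.CurrencyCode, InLink.TradeDate)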
Note: Before you can use the Import Web Service routine facility, you must first
have imported the metadata from the operation you want to derive the routine
from. See "Importing a Table Definition".
Procedure
1. Import the metadata for the web service operation. This is done using Import >
Table Definitions > Web Services WSDL Definitions (see "Importing a Table
Definition").
2. Choose Import > Web Service Function Definitions... . The Web Service Browser
appears.
The upper right panel shows the web services whose metadata you have
loaded. Select a web service to view the operations available in the web service
in the upper left pane.
3. Select the operation you want to import as a routine. Information about the
selected web service is shown in the lower pane.
4. Either click Select this item in the lower pane, or double-click the operation in
the upper right pane. The operation is imported and appears as a routine in the
Web Services category under a category named after the web service.
Results
Once the routine is imported into the Repository, you can open the Server Routine
dialog box to view it and edit it. See "Viewing and Editing a Routine".
Click Import > Metadata Asset Manager... to launch the InfoSphere Metadata
Asset Manager in your default browser. For more information, see Importing and
sharing assets.
These facilities are available if you have InfoSphere DataStage MVS Edition
installed along with the IMS Source package.
You can import IMS definitions into the InfoSphere DataStage repository from Data
Base Description (DBD) files and Program Specification Block (PSB) files. A DBD
file defines the physical structure of an IMS database. A PSB file defines an
application's view of an IMS database.
During DBD import, the InfoSphere DataStage table name is created from the DBD
name and the segment name. Column names are created from the DBD field
names; however, only those fields that are defined in the DBD become columns.
Fillers are created where necessary to maintain proper field displacement and
segment size. If you have a definition of the complete IMS segment in the form of
a CFD, you can import it to create the completely defined table, including any
columns that were captured as fillers.
You can import IMS definitions from IMS version 5 and above. IMS field types are
converted to COBOL native data types during capture, as described in the table
below.
1 (n) is equal to the number of bytes.
Choose Import > IMS Definitions > Data Base Description (DBD)... to import a DBD
or Import > IMS Definitions > Program Specification Block (PSB)... to import a PSB.
The Import Metadata dialog box appears.
Exporting objects
The Designer allows you to export various objects from the repository. You can
export objects in text format or in XML.
You should be aware that, when you export objects that contain encrypted values,
any default value that is entered is held as encrypted text, and that text can be viewed in the
export file. So, for example, if you exported a job design where a database
connection password is given as a stage property, then the encrypted value of that
password is visible in the file. This is true whether you export to a dsx or an xml
file. The solution is to specify passwords and other encrypted properties as job
parameters with no default setting, and only give their value at run time.
Procedure
1. Select the objects that you want to export in the repository tree.
2. Do one of the following:
v Choose Export from the shortcut menu.
v Choose Repository > Export from the main menu.
The Repository Export dialog box appears, populated with the selected
items.
3. Use the Add, Remove, and Select all hyperlinks to change the selection if
necessary. Selecting Add opens a browse dialog box showing the repository
tree.
4. From the drop-down list, choose one of the following options to control how
any jobs you are exporting are handled:
v Export job designs with executables (where applicable)
v Export job designs without executables (this is the only option available
for XML export)
v Export job executables without designs
5. Select the Exclude read-only objects check box to exclude such objects from
the export.
6. Select the Include dependent items check box to automatically include items
that your selected items depend upon.
7. Click the Options button to open the Export Options dialog box. This allows
you to change the exporter's default settings on the following:
Under the Default > General branch:
v Whether source code is included with exported routines (yes by default)
v Whether source code is included with job executables (no by default)
v Whether source content is included for data quality items.
Under the Default > Viewer branch:
v Whether the default viewer or a specified viewer should be used (the default
viewer is the one that Windows opens this type of file with; this is normally
Internet Explorer for XML documents, but you need to explicitly specify
one such as Notepad for .dsx files). Using the default viewer is the default
option.
Under the XML > General branch:
v Whether a DTD is to be included (no by default)
v Whether property values are output as internal values (which are numeric)
or as externalized strings (internal values are the default). Note that, if you
chose the externalized string option, you will not be able to import the file
that is produced.
Under the XML > Stylesheet branch:
v Whether an external stylesheet should be used (no by default) and, if it is,
the type and the file name and location of the stylesheet.
8. Specify the type of export you want to perform. Choose one of the following
from the drop-down list:
v dsx
v dsx 7-bit encoded
v legacy XML
9. Specify or select the file that you want to export to. You can click the View
button to look at this file if it already exists (this will open the default viewer
for this file type specified in Windows or any viewer you have specified in the
Export Options dialog box).
10. Select Append to existing file if you want the exported objects to be
appended to, rather than replace, objects in an existing file. (This is not
available for export to XML.)
11. Examine the list of objects to be exported to assure yourself that all the ones
you want to export have Yes in the Included column.
12. Click Export to export the chosen objects to the specified file.
Procedure
1. Choose Export > DataStage Components. The Repository Export dialog box
appears; it is empty (even if you have objects currently selected in the
repository tree).
There are two ways of doing this. The dsexport command is a Windows
application and requires user interaction with message boxes. The dscmdexport
command is a command line application and can be run unattended. Both
commands are run from the InfoSphere DataStage client directory (by default
c:\IBM\InformationServer\Clients\Classic).
dscmdexport command
The dscmdexport command is a command-line application that you use to export
InfoSphere DataStage components to a file from the command line. You can run
this command unattended.
The dscmdexport command includes the following syntax. To view help options for
the command, enter dscmdexport at the command prompt.
dscmdexport /AF=authfile |
/URL=domainURL /H=hostname [/U=username [/P=password]] |
/D=domain /H=hostname [/U=username [/P=password]]
project pathname
/V
authfile
Name of the encrypted credentials file that contains the connection details.
domainURL
The full format URL for the domain to log on to. The URL includes the
protocol, host, and port information for the domain in this format:
https://domain:port. The port defaults to 9443 if it is not specified.
domain | domain:port_number
Name of the application server. This parameter can also have a port number.
hostname
Host name for the InfoSphere Information Server engine where you are
exporting the file to.
username
Name of the user that is connecting to the application server.
password
Password for the username that is connecting to the application server.
project
Name of the project that you are exporting components from.
pathname
Full path name of the file that you are exporting.
/V Flag that toggles whether the verbose option is used.
If you specify only the domain and host name details, you are prompted for the
user name and password. (The password is hidden as you type in the command
window.) If you specify the domain, host name, and user name details, you are
prompted for the password.
The following command exports the dstage2 project from the R101 server to the
dstage2.dsx file:
dscmdexport /D=domain:9443 /H=R101 /U=billg /P=paddock
dstage2 C:\temp\dstage2.dsx
Messages from the export are sent to the console by default, but can be redirected
to a file by using the greater than symbol (>). In the following example, the
messages from the export are sent to the C:\temp\exportlog file:
dscmdexport /D=domain:9443 /H=R101 /U=billg /P=paddock dstage99
C:\temp\project99.dsx /V > C:\temp\exportlog
dsexport command
The dsexport command is a Microsoft Windows application that you use to export
InfoSphere DataStage components to a file. You can export an entire job by using
this command.
If you do not use the /AF option and you want to run the command without any
user interaction, you must supply all of the connection details on the command
line. If all of the connection details are not supplied on the command line, the
Logon dialog is displayed. The Logon dialog is pre-filled with the values from the
The following command exports the dstage2 project from the R101 server to the
dstage2.dsx file:
dsexport.exe /D=domain:9443 /H=R101 /U=billg /P=paddock
dstage2 C:\temp\dstage2.dsx
The job reporting facility allows you to generate an HTML report of a server,
parallel, sequence, or mainframe job, or of a shared container. You can view this report
in the Reporting Console or in a standard Internet browser (such as Microsoft
Internet Explorer) and print it from the browser.
The report contains an image of the job design followed by information about the
job or container and its stages. Hotlinks facilitate navigation through the report.
The following illustration shows the first page of an example report, showing the
job image and the contents list from which you can link to more detailed job
component descriptions:
Note: Job reports work best using Microsoft Internet Explorer 6; they might not
perform optimally with other browsers.
Important: If you generate a job report from the command line, the report is not
available for viewing through the reporting console.
Syntax
If you do not use the /AF option and you want to run the command without any
user interaction, you must supply all of the connection details on the command
line. If all of the connection details are not supplied on the command line, the
Logon dialog is displayed. The Logon dialog is pre-filled with the values from the
command line. Any missing values are pre-filled with the values from your most
recent successful connection, except for the Password field which you must supply.
The password is not displayed as you type in the Logon dialog.
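Based on the example that follows, a report-generation run takes the following
general form (a sketch; the flag meanings are inferred from the files that the
example produces: /R requests the report, /RP names the folder to write it to, and
/RX also produces the intermediate XML file):
dsdesign /AF=authfile |
/D=domain /H=hostname [/U=username [/P=password]]
project job /R [/RP=report_directory] [/RX]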
The following command creates the ServerJob1.htm report and the ServerJob1.xml
intermediate file. Each of these files is saved in the C:\JobReports\ServerJob1
directory.
dsdesign /D=domain:9443 /H=R101 /U=william /P=wombat
dstage ServerJob1 /R /RP=C:\JobReports /RX
Server jobs and parallel jobs are compiled on the IBM InfoSphere DataStage server,
and are subsequently run on the server using the InfoSphere DataStage Director.
To compile a job, open the job in the Designer and do one of the following:
v Choose File > Compile.
v Click the Compile button on the toolbar.
If the job has unsaved changes, you are prompted to save the job by clicking OK.
The Compile Job dialog box appears. This dialog box contains a display area for
compilation messages and has the following buttons:
v Re-Compile. Recompiles the job if you have made any changes.
v Show Error. Highlights the stage that generated a compilation error. This button
is only active if an error is generated during compilation.
v More. Displays the output that does not fit in the display area. Some errors
produced by the compiler include detailed BASIC output.
v Close. Closes the Compile Job dialog box.
v Help. Invokes the Help system.
The job is compiled as soon as this dialog box appears. You must check the display
area for any compilation messages or errors that are generated.
For parallel jobs there is also a force compile option. The compilation of parallel
jobs is by default optimized such that transformer stages only get recompiled if
they have changed since the last compilation. The force compile option overrides
this and causes all transformer stages in the job to be compiled. To select this
option:
v Choose File > Force Compile.
The following criteria in the job design are checked during compilation:
v Primary input. If you have more than one input link to a Transformer stage, the
compiler checks that one is defined as the primary input link.
v Reference input. If you have reference inputs defined in a Transformer stage, the
compiler checks that these are not from sequential files.
v Key expressions. If you have key fields specified in your column definitions, the
compiler checks whether there are key expressions joining the data tables.
Successful compilation
After a job is compiled successfully, you can begin working with the job.
If the Compile Job dialog box displays the message Job successfully compiled
with no errors, you can:
v Validate the job
v Run or schedule the job
v Release the job
v Package the job for deployment on other IBM InfoSphere DataStage systems
Syntax
Parameters
/? Specify this parameter to show a complete list of options.
/af
Specify the name of the credentials file that contains the connection details.
This file can be encrypted. The dscc command is one of the commands that
supports the authfile option.
For important formatting details about the credentials file, see The credentials
file.
/bo
Specify the buildop (custom parallel stage) to compile. Use * to specify all
buildops or \folderName\* to specify all the buildops in the folder named
folderName.
/d Specify the host name of the services tier. This parameter can also have a port
number in the form domain:port_number.
If you specify only the host name details, you are prompted for the user name and
password. (The password is hidden as you type in the command window.) If you
specify the host name and user name details, you are prompted for the password.
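As a purely illustrative sketch, assuming that the project name is passed as a
positional argument and that options take the /name=value form used by the other
commands in this chapter, a command that compiles every buildop in a project by
using a credentials file might look like this:
dscc /af=C:\ds_credentials.txt dstage1 /bo=*
Here C:\ds_credentials.txt is a placeholder path and dstage1 a placeholder project
name; run dscc /? for the authoritative option list.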
The Generated OSH page appears if you have selected the Generated OSH visible
option in the IBM InfoSphere DataStage Administrator.
Note: Mainframe jobs are not supported in this version of IBM InfoSphere
Information Server.
You can also generate code from the command line or using the compile wizard.
To generate code for a job, open the job in the Designer and do one of the
following:
v Choose File > Generate Code.
v Click the Generate Code button on the toolbar.
If the job has unsaved changes, you are prompted to save the job by clicking OK.
The Mainframe Job Code Generation dialog box appears. This dialog box contains
details of the code generation files and a display area for compilation messages. It
has the following buttons:
v Generate. Click this to validate the job design and generate the COBOL code
and JCL files for transfer to the mainframe.
v View. This allows you to view the generated files.
v Upload job. This button is enabled if the code generation is successful. Clicking
it opens the Remote System dialog box, which allows you to specify a machine
to which to upload the generated code.
Status messages are displayed in the Validation and code generation status
window in the dialog box.
Job validation
Validate a mainframe job to check the stages and expressions used in the stages.
Code generation
Code generation first validates the job design. If the validation fails, code
generation stops.
Status messages about validation are in the Validation and code generation status
window. They give the names and locations of the generated files, and indicate the
database name and user name used by each relational stage.
Job upload
After you have successfully generated the mainframe code, you can upload the
files to the target mainframe, where the job is compiled and run.
To upload a job, choose File > Upload Job. The Remote System dialog box appears,
allowing you to specify information about connecting to the target mainframe
system. Once you have successfully connected to the target machine, the Job
Upload dialog box appears, allowing you to actually upload the job.
JCL templates
IBM InfoSphere DataStage uses JCL templates to build the required JCL files when
you generate a mainframe job.
InfoSphere DataStage comes with a set of building-block JCL templates suitable for
various tasks.
The supplied templates are in a directory called JCL Templates under the engine
tier server install directory. There are also copies of the templates held in the
InfoSphere DataStage Repository for each InfoSphere DataStage project.
You can edit the templates to meet the requirements of your particular project. This
is done using the JCL Templates dialog box from the Designer. Open the JCL
Templates dialog box by choosing Tools > JCL Templates. It contains the following
fields and buttons:
v Platform type. Displays the installed platform types in a drop-down list.
v Template name. Displays the available JCL templates for the chosen platform in
a drop-down list.
v Short description. Briefly describes the selected template.
If there are system-wide changes that apply to every project, you can edit the
template defaults. Changes made there are picked up by every
InfoSphere DataStage project on that InfoSphere DataStage engine tier. The JCL
Templates directory contains two sets of template files: a default set that you can
edit, and a master set which is read-only. You can always revert to the master
templates if required, by copying the read-only masters over the default templates.
Use a standard editing tool, such as Microsoft Notepad, to edit the default
templates.
Code customization
When you check the Generate COPY statement for customization box in the Code
generation dialog box, IBM InfoSphere DataStage provides four places in the
generated COBOL program that you can customize.
When you check Generate COPY statement for customization, four additional
COPY statements are added to the generated COBOL program:
v COPY ARDTUDAT. This statement is generated just before the PROCEDURE
DIVISION statement. You can use this to add WORKING-STORAGE variables or
a LINKAGE SECTION to the program.
v COPY ARDTUBGN. This statement is generated just after the PROCEDURE
DIVISION statement. You can use this to add your own program initialization
code. If you included a LINKAGE SECTION in ARDTUDAT, you can use this to
add the USING clause to the PROCEDURE DIVISION statement.
v COPY ARDTUEND. This statement is generated just before each STOP RUN
statement. You can use this to add your own program termination code.
v COPY ARDTUCOD. This statement is generated as the last statement in the
COBOL program. You use this to add your own paragraphs to the code. These
paragraphs are those which are PERFORMed from the code in ARDTUBGN and
ARDTUEND.
You can either preserve these members and create your own COPYLIB, or you can
create your own members in the InfoSphere DataStage runtime COPYLIB. If you
preserve the members, then you must modify the InfoSphere DataStage compile
and link JCL templates to include the name of your COPYLIB before the
InfoSphere DataStage runtime COPYLIB. If you replace the members in the
InfoSphere DataStage COPYLIB, you do not need to change the JCL templates.
You can start the wizard from the Tools menu of the Designer or Director clients.
Select Tools > Multiple Job Compile. You can also select multiple items in the
repository tree or Advanced Find window and use the shortcut menu to start the
compiler wizard.
Procedure
1. If you started the wizard from the Tools menu, a screen prompts you to specify
the criteria for selecting jobs to compile. Choose one or more of:
v Server
v Parallel
v Mainframe
v Sequence
v Custom server routines
v Custom parallel stage types
You can also specify that only currently uncompiled jobs will be compiled,
and that you want to manually select the items to compile.
2. Click Next>.
If you chose the Show manual selection page option, the Job Selection
Override screen appears. Choose jobs in the left pane and add them to the right
pane by using the Add buttons or remove them from the right pane by using
the Remove buttons. Clicking Add while you have a folder selected selects all
the items in that folder and moves them to the right pane. All the jobs in the
right pane will be compiled.
3. Click Next>. If you are compiling parallel or mainframe jobs, the Compiler
Options screen appears, allowing you to specify the following:
v Force compile (for parallel jobs).
v An upload profile for mainframe jobs you are generating code for.
4. Click Next>. The Compile Process screen appears, displaying the names of the
selected items and their current compile status.
5. Click Start Compile to start the compilation. As the compilation proceeds the
status changes from Queued to Compiling to Compiled OK or Failed and
details about each job are displayed in the compilation output window as it
compiles. Click the Cancel button to stop the compilation, although you can
only cancel between compilations so the Designer client might take some time
to respond.
6. Click Finish. If the Show compile report checkbox was selected the job
compilation report screen appears, displaying the report generated by the
compilation.
IBM InfoSphere DataStage includes a special type of job, known as a sequence job,
that you use to specify a sequence of parallel jobs or server jobs to run. You specify
the control information, such as the different courses of action to take depending
on whether a job in the sequence succeeds or fails. After you create a sequence job,
you schedule it to run using the InfoSphere DataStage Director client, just like you
would with a parallel job or server job. The sequence job is listed in the InfoSphere
DataStage repository and in the InfoSphere DataStage Director client.
Designing a sequence job is similar to designing a parallel job. You create the
sequence job in the Designer client and add stages. The stages that you add are
known as activities. When you link two activities together, you define the triggers
that define the flow of control. The job sequence includes properties and can have
parameters, which are passed to the activities in the sequence job.
Each activity also contains properties that are passed to other activities in the
sequence job. As with stages, you can define job parameters for each activity. You
can test the parameters in the trigger expressions, and pass the parameters to other
activities in the sequence job.
The following sequence job shows a sequence that runs the Demo job. If this
sequence job runs successfully, the success trigger causes the Overnightrun
sequence job to run. If the Demo job fails, the Failure trigger causes the Failure
sequence job to run.
Procedure
1. From the Designer client menu bar, click File > New. The New dialog box
appears.
2. Click the Job folder, then click Sequence Job.
3. Click OK.
4. From the Palette, add activities to your sequence job.
5. Specify the properties for each activity, such as the job or activity that occurs
when the activity is triggered.
6. Link the activities together.
7. Specify trigger information to define the actions that occur if the activity
succeeds or fails.
8. Save your sequence job. Sequence job names can be any length up to 255
characters, must begin with an alphabetic character, and can contain
alphanumeric characters and underscores. Spaces are not permitted.
Specifying triggers
Triggers determine what actions an activity takes when it runs as part of a
sequence job. You must link two activities together for the Triggers tab to be
available in the properties for an activity.
Activities can have one input trigger only, but can have multiple output triggers.
Sequence jobs must run for the associated trigger to activate. If a sequence job fails
to run, the associated trigger is not activated. For example, if a sequence job has a
state of Aborted, the triggers for the activities in the sequence job fail to activate.
Procedure
1. Create a link between two activities.
2. Change the link name to represent the action that is triggered. For example, an
activity might have two output links: one link to represent success and one link
to represent failure. You might name the first link success and the other link
failure.
3. Open the properties for the activity that you want to specify a trigger for.
4. In the Expression Type field, select the trigger for the selected activity.
Option Description
OK Run the target activity if the source activity succeeds
Failed Run the target activity if the source activity fails
ReturnValue Return a routine or command when the activity runs
Warning Return a message if the activity produces warnings
Custom Run a customized trigger for the activity, which you define as an expression
UserStatus Return a customized status message and write it to the log
Unconditional Run the target activity when the source activity completes, regardless of whether other activities run the same trigger
Otherwise Run the target activity if none of the conditional triggers from the source activity ran successfully
5. Optional: If you selected Custom or UserStatus for the Expression Type, enter
the syntax for your expression in the Expression field.
6. Click OK to close the properties window for your activity.
Trigger types
When you link activities in a sequence job, you specify triggers to determine the
actions that occur when the activity runs. Each activity can output different trigger
types. Three types of triggers are available, with some types having subtypes.
Conditional
A conditional trigger runs the target activity if the source activity fulfills
the specified condition. The condition is defined by an expression, and can
be one of the following types:
Unconditional
An unconditional trigger runs the target activity once the source activity
completes, regardless of what other triggers are run from the same activity.
Otherwise
An otherwise trigger is used as a default where a source activity has
multiple output triggers, but none of the conditional triggers ran.
Use the built-in expression editor to ensure that your expressions are valid. The
following list includes the values that you can enter to create valid expressions for
your triggers.
v Literal strings enclosed in double-quotes or single-quotes
v Numeric constants (integer and floating point)
v Parameters for the sequence job
v Prior activity variables, such as job exit status
v All built-in BASIC functions in server jobs
v The following macros and constants:
– DSHostName
– DSJobController
– DSJobInvocationId
– DSJobName
– DSJobStartDate
– DSJobStartTime
– DSJobStartTimestamp
– DSJobWaveNo
– DSProjectName
v Arithmetic operators, such as the plus symbol (+), minus symbol (-),
multiplication symbol (*), and division symbol (/)
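For example, a Custom trigger on a link from an Execute Command activity with the
hypothetical stage label CheckInput could combine an activity variable, a macro,
and a numeric constant:
CheckInput.$ReturnValue = 0 And DSJobWaveNo > 1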
You can use variables when defining trigger expressions for Custom, ReturnValue,
and UserStatus conditional triggers. The following table describes the variables
that you can use for each of the supported activity types.
In the table, activity_stage is the name of the activity. You can also use the job
parameters from the sequence job.
Custom triggers in Nested Condition and Sequencer activities can use any of the
variables in the above table used by the activities connected to them.
Important: When you enter valid variable names in an expression, such as a job
parameter name or job exit status, do not delimit them with the hash symbol (#).
Table 5. Variables used in defining trigger expressions for different activity types
Exception Handler (these variables are available for use in the sequence of
activities that the stage initiates, not in the stage itself):
v activity_stage.$ErrMessage. The text of the message that will be logged as a
warning when the exception occurs.
v activity_stage.$ErrNumber. Returns an error code that indicates the reason that
the Exception Handler activity was invoked:
1 The activity ran a job, but it aborted because no specific handler was
configured
-1 The job failed to run for an unspecified reason
v activity_stage.$ErrSource. The stage label of the activity that triggered the
exception. For example, an activity stage called a job that failed to run.
Execute Command:
v activity_stage.$CommandName. The name of the command that the activity ran,
including the path name if one was specified.
v activity_stage.$CommandOutput. The output captured from running the command.
v activity_stage.$ReturnValue. The command status.
You can also specify that the sequence job automatically handles any errors that
occur when the sequence job runs. By handling errors automatically, you do not
have to specify an error handling trigger for every activity in the sequence job. You
can enable or disable this setting on a project-wide basis, or for individual
sequence jobs.
Procedure
1. Ensure that the job can be restarted. If a sequence job is restartable, and
one of its jobs fails during a run, the following status appears in the Director
client:
Aborted/restartable
2. Choose one of the following actions for your sequence job.
Values for parameters are collected when you run the sequence job. The
parameters that you define are available to all activities in your sequence job, so all
available parameters are listed in this page. For example, if you are scheduling
three jobs, each of which requires an input file at run time, you can specify a
distinct parameter for each of the input files. You then edit the Job for each activity
to use the parameters that you created. When you run the sequence job, the Job
Run Options window opens, prompting you to enter values for each parameter.
The appropriate file name is then passed to each job as it runs.
Procedure
1. Open the job that you want to define parameters for.
2. Click Edit > Job Properties to open the Job Properties window.
3. Click the Parameters tab.
4. Enter the following information for the parameter that you are creating. Each
parameter represents a source file or a directory.
Parameter name
The name of the parameter.
Prompt
The text that displays for this parameter when you run the job.
Type
The type of parameter that you are creating, which can be one of the
following values:
Procedure
1. Open the job that you want to define environment variables for.
2. Press Ctrl + J to open the Job Properties window.
3. Click the Parameters tab.
4. In the lower right of the Parameters page, click Add Environment Variable.
The Choose environment variable window opens to display a list of the
available environment variables.
General page
Use the General page to specify default parameters for your sequence job.
The following compilation options specify details about restarting the sequence job
if one of the jobs fails.
Add checkpoints so sequence is restartable
Enables this job to be restarted upon failure. If you enable this feature on a
project-wide basis in the InfoSphere DataStage Administrator client, this option
is selected by default when the sequence job is created.
Important: When using this feature, avoid using any routines within the
sequence job that return any value other than zero to indicate success. In
InfoSphere DataStage, non-zero values indicate a failure.
For each activity that does not have a specific trigger for error handling, code
is inserted that branches to an error handling point. If the compiler inserts
code to handle errors, the following sequence of actions occurs if a job within
the job sequence fails:
v A warning is logged to indicate that the job finished with warnings.
v If the sequence job has an exception handler defined, the code uses that
exception handler.
v If no exception handler is defined, the sequence job fails with a
message.
Log warnings after activities that finish with status other than OK
Generates a message in the sequence log if a job in the sequence job finishes
with a non-zero completion code (for example, warnings or fatal errors).
Messages are also logged for routine or command activities that fail.
Log report messages after each job run
Generates a status report for a job immediately after the sequence job runs. The
following example shows the type of information that is included in the report:
**************************************************
STATUS REPORT FOR JOB: jobname
Generated: 2003-10-31 16:13:09
Job start time=2003-10-31 16:13:07
Job end time=2003-10-31 16:13:07
Job elapsed time=00:00:00
Job status=1 (Finished OK)
Stage: stagename1, 10000 rows input
Stage start time=2003-10-31 16:17:27, end time=2003-10-31 16:17:27, elapsed=00:00:00
Link: linkname1, 10000 rows
Stage: stagename2, 10000 rows input
Stage start time=2003-10-31 16:17:28, end time=2003-10-31 16:17:28, elapsed=00:00:00
Link: linkname2, 10000 rows
Link: linkname3, 10000 rows
Parameters page
Use the Parameters page to specify parameters, parameter sets, and environment
variables for your sequence job.
You can refer to the parameters in the job sequence by name. When you are
entering an expression, you just enter the name directly. Where you are entering a
parameter name in an ordinary single-line text box, you need to delimit the name
with hash symbols, for example: #dayofweek#.
This page includes information that is used to diagnose problems with sequence
jobs.
Dependencies page
Use the Dependencies page to view the dependencies of the sequence job, such as
functions, routines, or jobs that the sequence job runs.
Listing the dependencies of the sequence job here ensures that all required
components are included if the sequence job is packaged for use on another
system.
To add an activity to your job sequence, drag the corresponding icon from the
Palette to the sequence job canvas. After you design your sequence job by adding
activities and triggers, you define the properties for each activity. The properties
that you define control the behavior of the activity.
To view the properties of an activity, open the activity from the Designer client
canvas. The properties of each activity depend on the type of activity that you
work with. All activities have a General tab, and any activities that contain output
triggers have a Triggers tab.
You can also add jobs or routines to your design as activities by dragging the
associated icon from the Designer client Repository and dropping it on your
sequence job canvas.
The pages that are available for each activity vary depending on the type.
However, all activities have a General page, and any activities with output triggers
have a Triggers page.
General
All the stages in between the Start Loop and End Loop stages are included in that
loop (as illustrated in Examples of Using Loop Stages). You draw a link back to the
Start Loop activity stage that this stage is paired with.
The ExecCommand stage contains the following fields in addition to the General
page and the Triggers page:
Command
The full pathname of the command to execute. This can be an operating
system command, a batch command file, or an executable file. You can use a
job parameter so that you can specify the actual command at run time.
Parameters entered in this field need to be delimited with hashes (#).
Parameters selected from the External Parameter Helper will automatically be
enclosed in hash symbols. You can browse for a command or job parameter.
Parameters
Allows you to pass parameters to the command. These should be entered in
the format that the command expects them. You can also specify a parameter
whose value will be specified at run time. Click the Browse button to open the
External Parameter Helper, which shows you all parameters available at this
point in the job sequence. Parameters entered in these fields need to be
delimited with hashes (#). Parameters selected from the External Parameter
Helper will automatically be enclosed in hash symbols.
You can prefix the command parameters with the string /NOLOG/ to prevent
the parameters being displayed in the job log. This feature is useful where
your parameters include passwords or other confidential information.
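For example, the Parameters field of an ExecCommand activity could pass two
hypothetical job parameters, DBUser and DBPassword (names used here only for
illustration), without writing them to the job log:
/NOLOG/ -user #DBUser# -password #DBPassword#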
An exception activity can only have a single unconditional output trigger, so does
not require a Triggers page. It has no input triggers. It serves as a starting point for
a sequence of activities to run if an exception has occurred somewhere in the main
sequence. Its Properties dialog box contains only a General page.
There are some exception handler variables that can be used in the sequence of
activities that the Exception Activity stage initiates. These are:
v stage_label.$ErrSource. This is the stage label of the activity stage that raised the
exception (for example, the job activity stage calling a job that failed to run).
v stage_label.$ErrNumber. Indicates the reason the Exception Handler activity was
invoked, and is one of:
– 1. Activity ran a job but it aborted, and there was no specific handler set up.
– -1. Job failed to run for some reason.
v stage_label.$ErrMessage. The text of the message that will be logged as a warning
when the exception is raised.
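For example, an activity in the handling sequence started by an exception handler
stage with the hypothetical label ErrorHandler could use a Custom trigger that
fires only when a job could not be started at all:
ErrorHandler.$ErrNumber = -1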
Important: This field is displayed only if the job identified by the Job name
property has the Allow Multiple Instance option enabled. You cannot leave
the Invocation Id Expression field blank.
Execution Action
Use this option to specify what action the activity takes when the job runs.
Choose one of the following options from the list:
v Run (the default)
Each nested condition can have one input trigger and typically has multiple output
triggers. For example, you could use a Nested Condition stage to implement the
following control sequence.
Load/init jobA
Run jobA
If ExitStatus of jobA = OK then /*tested by trigger*/
If Today = "Wednesday" then /*tested by nested condition*/
run jobW
If Today = "Saturday" then
run jobS
Else
run JobB
You specify the conditions that determine the sequence of actions for each output
trigger in the Triggers page. For example, you might link an ExecCommand stage
to your Nested Condition stage. In the Triggers page of the Nested Condition
stage, you enter the following expression.
DayCommand.$CommandOutput = "Wednesday"
You can specify a parameter whose value you indicate at run time for the SMTP
Mail server name field, Senders email address field, Recipients email address
field, and Email subject field. Click Browse to open the External Parameter
Helper, which shows you all parameters available at this point in the job sequence.
Parameters entered in these fields need to be delimited with hashes (#). Parameters
that you select from the External Parameter Helper are automatically enclosed in
hash symbols.
Attention: This stage can be installed on Red Hat Linux 64-bit systems only.
You can access routine arguments in the activity triggers in the form
routinename.argname. Such arguments can also be accessed by other activities
that occur subsequently in the job sequence. This is most useful for accessing an
output argument of a routine, but note that BASIC makes no distinction between
input and output arguments; it is up to you to establish which is which.
When you select the icon representing a routine activity, you can choose Open
Routine from the shortcut menu to open the Routine dialog box for that routine
ready to edit.
The Sequencer stage contains a General page and a Sequencer page. In the
Sequencer page, you select the operation mode for the Sequencer stage:
You can also right-click the Sequencer stage to change the mode. The Sequencer
stage icon changes slightly depending on the mode that you select.
The following is a job sequence that synchronizes the running of a job to the
successful completion of three other jobs. The sequencer mode is set to All. When
job1, job2, and job3 have all finished successfully, the sequencer will start jobfinal
(if any of the jobs fail, the corresponding terminator stage will end the job
sequence).
The following is a section of a similar job sequence, but this time the sequencer
mode is set to Any. When any one of the Wait_For_File or job activity stages
In addition to the General and Triggers pages, the Properties dialog box for a Start
Loop activity contains a Start Loop page.
You can have a numeric loop (where you define a counter, a limit, and an
increment value), or a List loop (where you perform the loop once for each item in
a list). You can pass the current value of the counter as a parameter into the stages
within your loop in the form stage_label.$Counter, where stage_label is the name of
the Start Loop activity stage as given in the Diagram window. You mark the end of
a loop with an End Loop Activity stage, which has a link drawn back to its
corresponding Start Loop stage.
You define the loop setting in the Start Loop page. The page contains:
v Loop type. Choose Numeric to implement a For...Next type loop, or List to
implement a For...Each type loop.
You can use parameters for any of these, and specify actual values at run time. You
can click the Browse button to open the External Parameter Helper, which shows
you all parameters available at this point in the job sequence. Parameters entered
in this field need to be delimited with hashes (#). Parameters selected from the
External Parameter Helper will automatically be enclosed in hash symbols.
The following is a section of a job sequence that makes repeated attempts to run a
job until either it succeeds, or until the loop limit is reached. The job depends on
a network link that is not always available.
The Start Loop stage, networkloop, has Numeric loop selected and the properties
are set as follows:
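A Numeric loop setting consistent with the 1440 iterations described below would
be (the field names and values are assumed here for illustration):
From: 1
Step: 1
To: 1440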
This defines that the loop will be run through up to 1440 times. The action differs
according to whether the job succeeds or fails:
The following is a section of a job sequence that makes use of the loop stages to
run a job repeatedly to process results for different days of the week:
The Start Loop stage, dayloop, has List loop selected and properties are set as
follows:
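A List loop setting consistent with the five iterations described below would be
(the values are assumed here for illustration, one per iteration):
List of values: monday, tuesday, wednesday, thursday, friday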
The job processresults will run five times; for each iteration of the loop, the job is
passed a day of the week as a parameter, in the order monday, tuesday, wednesday,
thursday, friday. The loop is then exited and control passed to the next activity in
the sequence.
The Terminator Properties dialog box has a General page and Terminator page. It
cannot have output links, and so has no Triggers page. The stage can have one
input.
You can have multiple Terminator activities and can place them anywhere in the
sequence. They are connected to other stages by triggers, which specify when a
terminator will be invoked.
These variables can then be used elsewhere in the sequence, for example to set job
parameters. Variables are used in the form stage_label.parameter_name, where
stage_label is the name of the User Variable activity stage as given in the Diagram
window.
The values of the user variables are set by expressions in the stage's properties.
(For details, see "Expressions".)
You would most likely start a sequence with this stage, setting up the variables so
they can be accessed by subsequent sequence activities. The exit trigger would
initiate the sequence proper. You can also use a User Variable activity further into a
sequence to change the value of a variable previously defined by an earlier User
Variable activity.
The variables are defined in the Properties page for the stage. To add a variable:
v Choose Add Row from the shortcut menu.
v Enter the name for your variable.
v Supply an expression for resolving the value of the variable.
In this example, the expression editor has picked up a fault with the definition.
When fixed, this variable will be available to other activity stages downstream as
MyVars.VAR1.
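For example, the stage MyVars might define VAR1 with an expression built from the
sequence macros listed earlier (the expression itself is an assumption, shown only
for illustration):
Name: VAR1
Expression: DSJobName : "_" : DSJobStartDate
Downstream activities could then refer to the resolved value as MyVars.VAR1.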
A job control routine provides the means of controlling other jobs from the current
job. A set of one or more jobs can be validated, run, reset, stopped, and scheduled
in much the same way as the current job can be. You can, if required, set up a job
whose only function is to control a set of other jobs. The graphical job sequence
editor produces a job control routine when you compile a job sequence (you can
view this in the Job Sequence properties), but you can set up your own control job
by entering your own routine on the Job control page of the Job Properties dialog
box. The routine uses a set of BASIC functions provided for the purpose. The Job
control page provides a basic editor to let you construct a job control routine using
the functions.
The toolbar contains buttons for cutting, copying, pasting, and formatting code,
and for activating Find (and Replace). The main part of this page consists of a
multiline text box with scroll bars. The Add Job button provides a drop-down list
box of all the server and parallel jobs in the current project. When you select a
compiled job from the list and click Add, the Job Run Options dialog box appears,
allowing you to specify any parameters or run-time limits to apply when the
selected job is run. The job will also be added to the list of dependencies. When
you click OK in the Job Run Options dialog box, you return to the Job control page,
where you will find that IBM InfoSphere DataStage has added job control code for
the selected job. The code sets any required job parameters or limits, runs the job,
waits for it to finish, then tests for success.
Alternatively, you can type your routine directly into the text box on the Job
control page, specifying jobs, parameters, and any runtime limits directly in the
code.
The following is an example of a job control routine. It schedules two jobs, waits
for them to finish running, tests their status, and then schedules another one. After
the third job has finished, the routine gets its finishing status.
* get a handle for the first job
Hjob1 = DSAttachJob("DailyJob1",DSJ.ERRFATAL)
* set the job's parameters
Dummy = DSSetParam(Hjob1,"Param1","Value1")
* run the first job
Dummy = DSRunJob(Hjob1,DSJ.RUNNORMAL)
* get a handle for the second job
Hjob2 = DSAttachJob("DailyJob2",DSJ.ERRFATAL)
* set the job's parameters
Dummy = DSSetParam(Hjob2,"Param2","Value2")
* run the second job
Dummy = DSRunJob(Hjob2,DSJ.RUNNORMAL)
* Now wait for both jobs to finish before scheduling the third job
Dummy = DSWaitForJob(Hjob1)
Dummy = DSWaitForJob(Hjob2)
* Test the status of the first job (failure causes routine to exit)
J1stat = DSGetJobInfo(Hjob1, DSJ.JOBSTATUS)
If J1stat = DSJS.RUNFAILED
Then Call DSLogFatal("Job DailyJob1 failed","JobControl")
End
* Test the status of the second job (failure causes routine to exit)
J2stat = DSGetJobInfo(Hjob2, DSJ.JOBSTATUS)
If J2stat = DSJS.RUNFAILED
Then Call DSLogFatal("Job DailyJob2 failed","JobControl")
End
* Now get a handle for the third job
Hjob3 = DSAttachJob("DailyJob3",DSJ.ERRFATAL)
* and run it
Dummy = DSRunJob(Hjob3,DSJ.RUNNORMAL)
* then wait for it to finish running
Dummy = DSWaitForJob(Hjob3)
* Finally, get the finishing status for the third job and test it
J3stat = DSGetJobInfo(Hjob3, DSJ.JOBSTATUS)
If J3stat = DSJS.RUNFAILED
Then Call DSLogFatal("Job DailyJob3 failed","JobControl")
End
Jobs that are not running might have the following statuses:
v DSJS.RUNOK - Job finished a normal run with no warnings.
v DSJS.RUNWARN - Job finished a normal run with warnings.
v DSJS.RUNFAILED - Job finished a normal run with a fatal error.
v DSJS.VALOK - Job finished a validation run with no warnings.
v DSJS.VALWARN - Job finished a validation run with warnings.
v DSJS.VALFAILED - Job failed a validation run.
v DSJS.RESET - Job finished a reset run.
v DSJS.STOPPED - Job was stopped by operator intervention (cannot tell run
type).
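As an illustration only (a minimal sketch; the job name and log messages are assumed for the example and are not part of the supplied routine), a job control routine might branch on these status values as follows:
* Sketch: run a job and branch on its finishing status
Hjob = DSAttachJob("DailyJob3", DSJ.ERRFATAL)
Dummy = DSRunJob(Hjob, DSJ.RUNNORMAL)
Dummy = DSWaitForJob(Hjob)
Stat = DSGetJobInfo(Hjob, DSJ.JOBSTATUS)
Begin Case
Case Stat = DSJS.RUNOK
   Call DSLogInfo("Job finished with no warnings", "JobControl")
Case Stat = DSJS.RUNWARN
   Call DSLogWarn("Job finished with warnings", "JobControl")
Case Stat = DSJS.RUNFAILED
   Call DSLogFatal("Job failed", "JobControl")
End Case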
If a job has an active select list, but then calls another job, the second job will
effectively wipe out the select list.
These topics describe the tools that are in the Designer client.
Intelligent assistants
IBM InfoSphere DataStage provides intelligent assistants which guide you through
basic InfoSphere DataStage tasks.
Procedure
1. In the New window, select Assistants > New Template from Job or use the
toolbar. A dialog box appears which allows you to browse for the job you want
to create the template from. All the server, parallel, and mainframe jobs in your
current project are displayed. Since job sequences are not supported, they are
not displayed.
2. Select the job to be used as a basis for the template. Click OK. Another dialog
box appears in order to collect details about your template.
3. Enter a template name, a template category, and an informative description of
the job captured in the template. The restrictions on the template name and
category follow Windows naming restrictions. The description is displayed in
the dialog for creating jobs from templates. Click OK.
Results
The Template-From-Job Assistant creates the template and saves it in the template
directory specified during installation. Templates are saved in XML notation.
Administering templates
To delete a template, start the Job-From-Template Assistant and select the template.
Click the Delete button. Use the same procedure to select and delete empty
categories.
The Assistant stores all the templates you create in the directory you specified
during your installation of IBM InfoSphere DataStage. You browse this directory
when you create a new job from a template. Typically, all the developers using the
Designer save their templates in this single directory.
After installation, no dialog is available for changing the template directory. You
can, however, change the registry entry for the template directory. The default
registry value is:
[HKLM\SOFTWARE\Ascential Software\DataStage Client\currentVersion\Intelligent Assistant\Templates]
Procedure
1. In the New window, select Assistants > New Job from Template or use the
toolbar. A dialog box appears which allows you to browse for the template you
want to create the job from.
2. Select the template to be used as the basis for the job. All the templates in your
template directory are displayed. If you have custom templates authored by
Consulting or other authorized personnel, and you select one of these, a further
dialog box prompts you to enter job customization details until the Assistant
has sufficient information to create your job.
3. When you have answered the questions, click Apply. You can cancel at any
time if you are unable to enter all the information. Another dialog box appears
to collect the details of the job you are creating:
4. Enter a new name and folder for your job. The restrictions on the job name
should follow IBM InfoSphere DataStage naming restrictions (that is, job names
should start with a letter and consist of alphanumeric characters).
5. Select OK.
Results
InfoSphere DataStage creates the job in your project and automatically loads the
job into the Designer.
You can read from and write to data sets, sequential files, and all the databases
supported by parallel jobs.
Results
All jobs created by the assistant consist of one source stage, one transformer stage, and one target stage.
Data sets are operating system files, each referred to by a descriptor file, usually
with the suffix .ds.
These files are stored on multiple disks in your system. A data set is organized in
terms of partitions and segments. Each partition of a data set is stored on a single
processing node. Each data segment contains all the records written by a single
IBM InfoSphere DataStage job. So a segment can contain files from many
partitions, and a partition has files from many segments.
The descriptor file for a data set contains the following information:
v Data set header information.
v Creation time and date of the data set.
v The schema of the data set.
v A copy of the configuration file used when the data set was created.
Procedure
1. Choose Tools > Data Set Management. A Browse Files dialog box appears:
2. Navigate to the directory containing the data set you want to manage. By
convention, data set files have the suffix .ds.
3. Select the data set you want to manage and click OK. The Data Set Viewer
appears. From here you can copy or delete the chosen data set. You can also
view its schema (column definitions) or the data it contains.
Partitions
The partition grid shows the partitions the data set contains and describes their
properties.
v #. The partition number.
v Node. The processing node that the partition is currently assigned to.
v Records. The number of records the partition contains.
v Blocks. The number of blocks the partition contains.
v Bytes. The number of bytes the partition contains.
Segments
Click on an individual partition to display the associated segment details.
Click the Refresh button to reread and refresh all the displayed information.
Click the Output button to view a text version of the information displayed in the
Data Set Viewer.
You can open a different data set from the viewer by clicking the Open icon on the
toolbar. The browse dialog box opens again and lets you browse for a data set.
Click OK to view the selected data; the Data Viewer window appears.
The Copy data set dialog box appears, allowing you to specify a path where the
new data set will be stored.
The new data set will have the same record schema, number of partitions and
contents as the original data set.
Note: You cannot use the UNIX cp command to copy a data set because IBM
InfoSphere DataStage represents a single data set with multiple files.
Note: You cannot use the UNIX rm command to remove a data set because IBM
InfoSphere DataStage represents a single data set with multiple files. Using rm
simply removes the descriptor file, leaving the much larger data files behind.
One of the great strengths of IBM InfoSphere DataStage is that, when designing
parallel jobs, you don't have to worry too much about the underlying structure of
your system, beyond appreciating its parallel processing capabilities. If your
system changes, is upgraded or improved, or if you develop a job on one platform
and implement it on another, you do not necessarily have to change your job
design.
InfoSphere DataStage learns about the shape and size of the system from the
configuration file. It organizes the resources needed for a job according to what is
defined in the configuration file. When your system changes, you change the file,
not the jobs. You can maintain multiple configuration files and read them into the
system according to your varying processing needs.
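For illustration (the node names, host name, and resource paths shown here are assumptions, not values supplied with the product), a simple configuration file describing two logical nodes on a single server might look like this:
{
   node "node1"
   {
      fastname "server1"
      pools ""
      resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
      resource scratchdisk "/tmp" {pools ""}
   }
   node "node2"
   {
      fastname "server1"
      pools ""
      resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
      resource scratchdisk "/tmp" {pools ""}
   }
}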
Procedure
1. To use the editor, choose Tools > Configurations. The Configurations dialog
box appears.
2. To define a new file, choose (New) from the Configurations list and type into
the upper text box.
3. Click Save to save the file at any point. You are asked to specify a configuration
name; the file is then saved under that name with an .apt extension.
4. You can verify your file at any time by clicking Check. Verification information
is output in the Check Configuration Output pane at the bottom of the dialog
box.
What are message handlers?
When you run a parallel job, any error messages and warnings are written to an
error log and can be viewed from the Director client. You can choose to handle
specified errors in a different way by creating one or more message handlers.
A message handler defines rules about how to handle messages generated when a
parallel job is running. You can, for example, use one to specify that certain types
of message should not be written to the log.
You can edit message handlers in the Designer or in the IBM InfoSphere DataStage
Director. The recommended way to create them is by using the Add rule to
message handler feature in the Director client.
When the job runs it will look in the local handler (if one exists) for each message
to see if any rules exist for that message type. If a particular message is not
handled locally, it will look to the project-wide handler for rules. If there are none
there, it writes the message to the job log.
Note: Message handlers do not deal with fatal error messages; these are always
written to the job log.
You can view, edit, or delete message handlers from the Message Handler
Manager. You can also define new handlers if you are familiar with the message
IDs (although InfoSphere DataStage will not know whether such messages are
warnings or informational). The preferred way of defining new handlers is by
using the add rule to message handler feature.
The file is stored in the folder called MsgHandlers in the engine tier install
directory. The following is an example message file.
TUTL 000031 1 1 The open file limit is 100; raising to 1024...
TFSC 000001 1 2 APT configuration file...
TFSC 000043 2 3 Attempt to Cleanup after ABORT raised in stage...
Each line in the file represents a message rule, and comprises four tab-separated
fields:
v Message ID. Case-specific string that uniquely identifies the message.
v Type. 1 for Info, 2 for Warn.
v Action. 1 = Suppress, 2 = Promote, 3 = Demote.
v Message. Example text of the message.
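For example, the first rule above suppresses the informational message TUTL 000031 (Type 1 = Info, Action 1 = Suppress), and the third rule demotes the warning TFSC 000043 to an informational message (Type 2 = Warn, Action 3 = Demote).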
JCL templates
IBM InfoSphere DataStage uses JCL templates to build the required JCL files when
you generate a mainframe job. InfoSphere DataStage comes with a set of
building-block JCL templates suitable for a variety of tasks.
The supplied templates are in a directory called JCL Templates under the engine
tier install directory. There are also copies of the templates held in the InfoSphere
DataStage Repository for each InfoSphere DataStage project.
The JCL Templates dialog box contains the following fields and buttons:
v Platform type. Displays the installed platform types in a drop-down list.
v Template name. Displays the available JCL templates for the chosen platform in
a drop-down list.
v Short description. Briefly describes the selected template.
v Template. The code that the selected template contains.
v Save. This button is enabled if you edit the code, or subsequently reset a
modified template to the default code. Click Save to save your changes.
v Reset. Resets the template code back to that of the default template.
If there are system-wide changes that will apply to every project, it is possible
to edit the template defaults. Changes made here will be picked up by every
InfoSphere DataStage project on that InfoSphere DataStage server. The JCL
Templates directory contains two sets of template files: a default set that you can
edit, and a master set which is read-only. You can always revert to the master
templates if required, by copying the read-only masters over the default templates.
Use a standard editing tool, such as Microsoft Notepad, to edit the default
templates.
Create a repository tree and tree objects to help manage the objects in the
repository.
Repository tree
You can use the Repository tree to browse and manage objects in the repository.
The Designer enables you to store the following types of object in the repository
(listed in alphabetical order):
v Data connections
v Data elements (see Data Elements)
v IMS Database (DBD) (see IMS Databases and IMS Viewsets)
v IMS Viewset (PSB/PCB) (see IMS Databases and IMS Viewsets)
v Jobs (see Developing a Job)
v Machine profiles (see Machine Profiles)
v Match specifications
v Parameter sets (see Parameter Sets)
v Routines (see Parallel Routines, Server Routines, and Mainframe Routines)
v Shared containers (see Shared Containers)
v Stage types (see the reference guides for each job type and Custom Stages for
Parallel Jobs)
v Table definitions (see Table Definitions)
v Transforms
When you first open IBM InfoSphere DataStage you will see the repository tree
organized with preconfigured folders for these object types, but you can arrange
your folders as you like and rename the preconfigured ones if required. You can
store any type of object within any folder in the repository tree. You can also add
your own folders to the tree. This allows you, for example, to keep all objects
relating to a particular job in one folder. You can also nest folders one within
another.
Click the blue bar at the side of the repository tree to open the detailed view of the
repository. The detailed view gives more details about the objects in the repository
tree and you can configure it in the same way that you can configure a Windows
Explorer view. Click the blue bar again to close the detailed view.
You can view a sketch of job designs in the repository tree by hovering the mouse
over the job icon in the tree. A tooltip opens displaying a thumbnail of the job
design. (You can disable this option in the Designer Options.)
Procedure
1. Start the Designer client. The New dialog box is displayed.
2. Open the folder for the type of object you want to create.
3. Select the exact type of object you require in the right pane.
4. Click OK.
5. If the object requires more information from you, another dialog box will
appear to collect this information. The type of dialog box depends on the type
of object (see individual object descriptions for details).
6. When you have supplied the required details, click OK. The Designer asks you
where you want to store the object in the repository tree.
Procedure
1. Select the folder that you want the new object to live in, right-click and choose
the type of object required from the shortcut menu.
2. If the object requires more information from you, another dialog box will
appear to collect this information. The type of dialog box depends on the type
of object (see individual object descriptions for details).
The New icon in the toolbar allows you to create the following types of object:
v Job sequence
v Mainframe job
v Parallel job
v Parallel shared container
v Server job
v Server shared container
You can also start the Intelligent Assistant from this menu to help you design
certain types of job (see "Creating Jobs Using Intelligent Assistants" on page 3-31).
Procedure
1. Click on the New icon in the toolbar.
2. Choose the desired object type from the drop-down menu.
3. A new instance of the selected job or container type opens in the Designer, and
from here you can save it anywhere you like in the repository.
The IBM InfoSphere Information Server product modules and user interfaces are
not fully accessible.
For information about the accessibility status of IBM products, see the IBM product
accessibility information at http://www.ibm.com/able/product_accessibility/
index.html.
Accessible documentation
The documentation that is in the information center is also provided in PDF files,
which are not fully accessible.
See the IBM Human Ability and Accessibility Center for more information about
the commitment that IBM has to accessibility.
The following table lists resources for customer support, software services, training,
and product and solutions information.
Table 6. IBM resources
IBM Support Portal: You can customize support information by choosing the products and the topics that interest you at www.ibm.com/support/entry/portal/Software/Information_Management/InfoSphere_Information_Server
Software services: You can find information about software, IT, and business consulting services on the solutions site at www.ibm.com/businesssolutions/
My IBM: You can manage links to IBM Web sites and information that meet your specific technical support needs by creating an account on the My IBM site at www.ibm.com/account/
Training and certification: You can learn about technical training and education services designed for individuals, companies, and public organizations to acquire, maintain, and optimize their IT skills at http://www.ibm.com/training
IBM representatives: You can contact an IBM representative to learn about solutions at www.ibm.com/connect/ibm/us/en/
IBM Knowledge Center is the best place to find the most up-to-date information
for InfoSphere Information Server. IBM Knowledge Center contains help for most
of the product interfaces, as well as complete documentation for all the product
modules in the suite. You can open IBM Knowledge Center from the installed
product or from a web browser.
Tip:
To specify a short URL to a specific product page, version, or topic, use a hash
character (#) between the short URL and the product identifier. For example, the
short URL to all the InfoSphere Information Server documentation is the
following URL:
http://ibm.biz/knowctr#SSZJPZ/
And the short URL to a specific topic, such as the one referenced above, is the
following URL (the ⇒ symbol indicates a line continuation):
http://ibm.biz/knowctr#SSZJPZ_11.3.0/com.ibm.swg.im.iis.common.doc/⇒
common/accessingiidoc.html
IBM Knowledge Center contains the most up-to-date version of the documentation.
However, you can install a local version of the documentation as an information
center and configure your help links to point to it. A local information center is
useful if your enterprise does not provide access to the internet.
Use the installation instructions that come with the information center installation
package to install it on the computer of your choice. After you install and start the
information center, you can use the iisAdmin command on the services tier
computer to change the documentation location that the product F1 and help links
refer to. (The ⇒ symbol indicates a line continuation):
Windows
IS_install_path\ASBServer\bin\iisAdmin.bat -set -key ⇒
com.ibm.iis.infocenter.url -value http://<host>:<port>/help/topic/
AIX® Linux
IS_install_path/ASBServer/bin/iisAdmin.sh -set -key ⇒
com.ibm.iis.infocenter.url -value http://<host>:<port>/help/topic/
Where <host> is the name of the computer where the information center is
installed and <port> is the port number for the information center. The default port
number is 8888. For example, on a computer named server1.example.com that uses
the default port, the URL value would be http://server1.example.com:8888/help/
topic/.
Your feedback helps IBM to provide quality information. You can use any of the
following methods to provide comments:
v To provide a comment about a topic in IBM Knowledge Center that is hosted on
the IBM website, sign in and add a comment by clicking the Add Comment button
at the bottom of the topic. Comments submitted this way are viewable by the
public.
v To send a comment about the topic in IBM Knowledge Center to IBM that is not
viewable by anyone else, sign in and click the Feedback link at the bottom of
IBM Knowledge Center.
v Send your comments by using the online readers' comment form at
www.ibm.com/software/awdtools/rcf/.
v Send your comments by e-mail to [email protected]. Include the name of
the product, the version number of the product, and the name and part number
of the information (if applicable). If you are commenting on specific text, include
the location of the text (for example, a title, a table number, or a page number).
Notices
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003 U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Depending upon the configurations deployed, this Software Offering may use
session or persistent cookies. If a product or component is not listed, that product
or component does not use cookies.
Table 7. Use of cookies by InfoSphere Information Server products and components
Product module: Any (part of InfoSphere Information Server installation)
Component or feature: InfoSphere Information Server web console
Type of cookie that is used: Session and persistent
Collect this data: User name
Purpose of data: Session management, authentication
Disabling the cookies: Cannot be disabled

Product module: Any (part of InfoSphere Information Server installation)
Component or feature: InfoSphere Metadata Asset Manager
Type of cookie that is used: Session and persistent
Collect this data: No personally identifiable information
Purpose of data: Session management, authentication, enhanced user usability, single sign-on configuration
Disabling the cookies: Cannot be disabled
If the configurations deployed for this Software Offering provide you as customer
the ability to collect personally identifiable information from end users via cookies
and other technologies, you should seek your own legal advice about any laws
applicable to such data collection, including any requirements for notice and
consent.
For more information about the use of various technologies, including cookies, for
these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy, IBM’s
Online Privacy Statement at http://www.ibm.com/privacy/details (in particular the
section entitled “Cookies, Web Beacons and Other Technologies”), and the “IBM
Software Products and Software-as-a-Service Privacy Statement” at
http://www.ibm.com/software/info/product-privacy.
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at www.ibm.com/legal/
copytrade.shtml.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java™ and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
The United States Postal Service owns the following trademarks: CASS, CASS
Certified, DPV, LACSLink, ZIP, ZIP + 4, ZIP Code, Post Office, Postal Service, USPS
and United States Postal Service. IBM Corporation is a non-exclusive DPV and
LACSLink licensee of the United States Postal Service.