DataStage Technical Design and Construction Procedures
Created by:
Mark Getty
Version 1.15
1.0 DataStage Technical Design and Construction
1.1 Purpose
2.0 Technical Design Procedures
2.1 Roles and Responsibilities
3.0 General Construction Procedures
3.1 DataStage Job Life Cycle
3.2 New Project
3.2.1 Documentation guidelines
3.2.2 Standards
3.2.2.1 DataStage Environment
3.2.2.2 DataStage Objects
3.2.2.3 DataStage Jobs
3.2.2.4 Job Execution
3.2.2.5 Exception/Error Handling
3.2.2.6 Versioning
3.2.2.7 Stages
3.2.2.8 Links
3.2.2.9 File Names / Table Names
3.2.2.10 Table/File Definitions/Metadata (Source/Target)
3.2.3 Helpful Hints
3.2.4 Checkin to Dimensions
3.3 Update Existing Project
3.3.1 Checkout from Dimensions/Import to DataStage
4.0 Specific Sourcing Procedures and Notes
4.1 Tandem
4.2 DB2
4.3 Mainframe Flat Files
4.3.1 Cobol Include Import
4.3.2 FTP from Mainframe
4.4 Oracle
4.5 Other
5.0 Specific Target Procedures and Notes
5.1 Tandem
5.2 DB2
5.3 Oracle
5.4 SAS
5.5 Other
6.0 Unit Testing Procedures
6.1 Creating Job Control
6.2 Creating a script
7.0 Promotion to Test
8.0 Promotion to Production
9.0 Unscheduled Changes
Appendix A: Tandem Extracts using Genus
Overview of Tandem Data Extract Design using Genus
Error Handling
Genus Integration with DataStage
Defining Tables to Genus
Appendix B: Table/File Categories
Appendix C: Deleting ‘Saved’ Metadata
Key Contributors:
Lisa Biehn
ETL Standards Committee
Joyce Engman
Larry Gervais
Patrick Lennander
Dave Fickes
Mark Getty
Sohail Ahmed
Kevin Ramey
Sam Iyer
Babatunde Ebohon
Kelly Bishop
1.0 DataStage Technical Design and Construction
1.1 Purpose
The purpose of this document is to provide a central reference for Company-specific DataStage
technical design and construction procedures and recommendations.
2.0 Technical Design Procedures
2.1 Roles and Responsibilities
Coordination of all activities is the final responsibility of the project team. The Project Lead must
communicate with Shared Services, Information Architecture, Database Analysts, and Tech
Support to ensure that the needed tasks are performed in the proper sequence.
Shared Services
Should be brought in as early as possible in the project. Helps with infrastructure decisions.
Information Architecture
Helps in the design and modification of tables. Ensures naming standards are followed for entity
and attribute names.
Database Analysts
Help in the design and creation of new tables in the appropriate environment. Ensure that
enough space is allocated for the tables.
Project Team
A ‘Project Administrator’ should be assigned. This person should be contacted whenever a new
element such as a job or job control is created to ensure that the new element is uniquely named.
This person will also maintain a cross reference between production scripts and the jobs they
execute. The Project Administrator should also be contacted to promote and compile elements in
production, and ensure the elements are checked back into Dimensions.
3.0 General Construction Procedures
3.1 DataStage Job Life Cycle
3.2 New Project
3.2.1 Documentation guidelines
3.2.2 Standards
3.2.2.1 DataStage Environment
Each data warehouse will be considered a “Project” within DataStage and will be assigned a
3 character high level qualifier similar to an application identifier. This qualifier will be used
by development projects as the first three characters of Job and Job Control names.
Multiple Development Teams may work within the same DataStage “Project” in order to
share ETL Objects and metadata.
Folders or Categories will be created under each data warehouse project within DataStage in
order to manage ETL Objects created by various Development Teams.
Keep all objects together in one project in order to support MetaStage functions. For instance,
to properly perform impact analysis it is necessary to have both jobs and tables present
within a project. Even though tables are not needed at execution time in a production
environment, they should nonetheless be placed in the production project; otherwise it will
be impossible to perform impact analysis.
3.2.2.6 Versioning
All DataStage components should be checked in to and out of PVCS Dimensions in their
respective projects.
All modifications to existing components should be made only after checking the code out of
PVCS Dimensions.
3.2.2.7 Stages
The first 3 to 4 characters of the stage name should indicate the stage type.
Beyond the stage type indicator, the rest of the name should be meaningful and descriptive. The
first character of the stage type portion of the name should be capitalized. Capitalization rules
beyond the first character are up to the discretion of the Project Team.
AGGR – Aggregator
CHCP – Change Capturer
CHAP – Change Apply
COPY – Copy
CPRS – Compress
CMPR – Compare
CREC – Combine Records
CIMP – Column Import
CGEN – Column Generator
CEXP – Column Export
DSET – Dataset
DIFF – Difference
DCDE – Decode
DB2 – DB2
ETGT – External Target
ESRC – External Source
FSET – File Set
FUNL – Funnel
GNRC – Generic
HASH – Hash
HEAD – Head
XPS – Informix XPS
JOIN – Join
LKUP – Lookup
LKFS – Lookup File set
MRGE – Merge
MVEC – Make Vector
MKSR – Make Sub-record
ORCL – Oracle
PRSR – Promote Sub-record
PEEK – Peek
PSAS – Parallel SAS DS
RDUP – Remove Duplicates
RGEN – Row Generator
SAMP – Sample
SAS – SAS
SEQL – Sequential
SORT – Sort
SPSR – Split Sub-record
SVEC – Split Vector
TAIL – Tail
TERA – Teradata
TRNS – Transform
WRMP – Write Range Map
BLDO – Build Op
WRAP – Wrapper
CUST – Custom
3.2.2.8 Links
Note: Items between <> are optional.
All links prior to the final active (transform or aggregator) stage will be named, except for links
from or to a passive stage and links from a lookup. All link names should be meaningful and
describe what data is being carried along the link.
Following is an example:
3.2.2.9 File Names / Table Names
Environment variables or parameters can be used for file paths instead of hardcoding. File
names and table names, however, must be hardcoded for accurate metadata analysis. File names must
begin with the project’s assigned 3 character qualifier. The file name may include the job name
with a descriptive file extension. Examples of common file extensions follow:
3.2.2.10 Table/File Definitions/Metadata (Source/Target)
The table/file definitions include COBOL copybooks, flat files, and other types of source
metadata imported with the tools in the Manager, e.g., the DB2 Plug-In.
For the metadata to be successfully loaded into DataStage and later on into
MetaStage, there are several steps that need to be followed:
1. Make sure that the metadata for all of the passive stages is already loaded into the Table
Definitions (see image below). If the source metadata is modeled in Erwin, import that
metadata into MetaStage and then export it to DataStage. Work with the MetaStage admin to
create a set of meaningful import categories to hold the various types of source and target
table definitions. See Appendix B for a sample list of categories that might be useful for a
project.
Note!!! When a table is imported to DataStage from Erwin, all related tables are also
imported (by default). It is important that the related tables are not deleted from
DataStage, as this will cause issues when importing the DataStage project to
MetaStage. The tendency may be to delete them since they are most likely not needed
by the project. For example, the table ITEM has a relationship to the table SIZE. The
project uses the ITEM table, but not the SIZE table. If a table is accidentally deleted,
reimport it from Erwin.
It is also important not to move table/file definitions to different categories after they
have been added to jobs. If you accidentally do this, you need to run usage analysis
on the moved table to identify what jobs are affected, move the table back to the
correct category, and then fix the link by dragging and dropping the definition on the
link or by editing the underlying DataStage code. Usage analysis and editing
underlying DataStage code are discussed in detail in Appendix C: Deleting ‘Saved’
Metadata. Instead of replacing the incorrect category with spaces, you would replace
it with the correct category.
The Project Lead must request from the MetaStage admin that the required definitions and table
structures be imported to MetaStage. Use the IA request form on an ongoing basis to bring
metadata into the DataStage repository. These definitions and table structures will then be
copied into DataStage via MetaStage. The MetaStage admin will do this as part of the
original request.
2. Build your job the way it is going to look depending on the requirements.
3. Either drag the metadata onto the passive link or open the stage, go to the Columns tab, and
click on Load. For the main input table/file it is recommended to load all of the columns, so
that if other columns are needed later they are already loaded. For the lookups, it is
recommended to load only the necessary columns (e.g., table_code and table_seq_id if doing a
lookup for a surrogate key).
After loading the metadata into the passive stages’ links, the links show a small yellow square
indicating that metadata has been assigned to them.
4. After loading the metadata into all of the passive stages, map the records by dragging the
necessary columns from the input link to the output link. Do not bring the metadata from the
table/file definitions into a link that is going from an active stage to another active stage. This
will cause confusion in MetaStage and create extra links to tables that are not necessary.
After dragging the needed columns, the stage should look like the image above. If temporary
columns are needed (i.e. columns not on the tables/files that are only needed between active
stages), add them directly to the active stage links. Do not press the “Save” button.
5. For the Transformer stage, the columns for the input come from the mapping from active
stage to active stage. As you can see in the image above, the columns on the target side are
all in red. This is because the columns still need to be mapped. You can either drag one column
at a time and drop it into the derivation for the target column, or use the auto-map utility in the
transformer to map all of the columns at once. Note that if derivations already exist, the
auto-map will overwrite them.
6. If modeled in Erwin, changes to the definitions and table structures used by the passive
stages must be made in Erwin and then pushed out to DataStage via MetaStage.
If not modeled in Erwin, changes to the definitions and table structures must be made in the
appropriate DataStage Table Definition category.
Then in DataStage, the definitions and table structures must be reloaded into each stage
which uses them. This can be done by explicitly loading the structure or by dragging and
dropping the structure onto the link. This will preserve the linkage and enable the metadata
analysis.
Note!!! If you have accidentally saved metadata into the ‘Saved’ folder from an active
link, refer to Appendix C: Deleting ‘Saved’ Metadata.
7. Usage Analysis can be performed using DataStage Manager.
The above example shows that the Fact table layout is used in two jobs, as an input and
output in each job. This verifies that the layout structure is used by only the passive stages.
3.2.3 Helpful Hints
In Control-M scripts, set the Warning flag to 0. This will allow unlimited warning messages.
Otherwise the default is 50.
When starting or restarting jobs outside of Control-M, if the box has been rebooted the default
parameters will have been reset, such as abending after 50 messages. Ensure parameters
are what you expect or use Control-M. Parameters to watch out for are:
Director/tools/option/no-limit
Administrator/projects/properties/general/up to previous
Use a Configuration File with a maximum of 4 nodes for all Parallel DataStage Jobs. The
Configuration File is set up and maintained using Manager/Tools/Configuration. Each job
then needs to refer to the appropriate Configuration file through the Job Properties
accessed via Designer. The variable is $APT_CONFIG_FILE.
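A minimal sketch of what such a Configuration File might contain is shown below; the host name
and disk paths are placeholders, and nodes 3 and 4 would follow the same pattern as the two
shown:
{
    node "node1"
    {
        fastname "etlserver"
        pools ""
        resource disk "/apps/datasets" {pools ""}
        resource scratchdisk "/apps/scratch" {pools ""}
    }
    node "node2"
    {
        fastname "etlserver"
        pools ""
        resource disk "/apps/datasets" {pools ""}
        resource scratchdisk "/apps/scratch" {pools ""}
    }
}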
For all Server Jobs using Inter-process row buffering, increase the Time Out parameter to 20
seconds.
Administrator/Projects/Properties/Tunables/Enable row buffer box checked
Administrator/Projects/Properties/Tunables/Interprocess button selected
Long running/high volume Parallel Jobs (e.g. +30 minutes/+10 million rows) should add the
following Parallel Environment Variables to Parameters:
APT_MONITOR_SIZE = 100000
APT_MONITOR_TIME = 5 (Must be set to 5 for APT_MONITOR_SIZE to work properly)
The C++ cout command no longer puts output to a file as it did in DataStage 7.0. It instead puts
it out to a log, which can cause the mount point to fill up.
Sample script executed by Control-M: Note that a time delay is used to check the status, and
several types of statuses are checked for.
#!/usr/bin/ksh
# top-level script to run DataStage Job GTLJC0001, intended to be called
# from CntlM
prg=${0##*/}
export PATH=/usr/bin:/usr/sbin:/apps/Ascential/DataStage/DSEngine/bin
sleep_secs=${1:-60}
job=GTLJC0001
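The remainder of the production script is maintained in Dimensions and is not reproduced here.
As a rough sketch only of the status-polling loop described above, using the dsjob command-line
interface (the $project value, the exit codes, and the exact status strings checked are
assumptions, not taken from the production script):
project=GTLPROJ                 # placeholder DataStage project name
dsjob -run $project $job || exit 1

# Poll the job until it leaves the RUNNING state, then map the final status
# to an exit code that Control-M can act on.
while true
do
    sleep $sleep_secs
    status=$(dsjob -jobinfo $project $job | grep "Job Status")
    case "$status" in
        *RUNNING*)             ;;   # still executing, keep waiting
        *"RUN OK"*)            echo "$prg: $job finished OK"; exit 0 ;;
        *"RUN with WARNINGS"*) echo "$prg: $job finished with warnings"; exit 0 ;;
        *)                     echo "$prg: $job failed - $status"; exit 8 ;;
    esac
done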
3.2.4 Checkin to Dimensions
Within your project file, create a subdirectory EtlImportExport that can be used for imports and
exports between Dimensions and DataStage. Within EtlImportExport, create subdirectories as
shown: Data Elements, Job Designs, etc. You can create additional subdirectories as needed,
even subdirectories within subdirectories. For example, under the Scripts subdirectory you may
want to create a subdirectory for Control-M executed scripts and a subdirectory for helper scripts.
Access DataStage Manager… log on to the box from which you want to extract your source.
Within your project file, select the element to be exported. In this case it is a job. Left click on
Export, DataStage Components…
Export job design to an appropriate file directory using ‘.dsx’ as a suffix.
Note: If you export the ‘executable’, you will still need to compile the source when you move it to
a new box.
Access Dimensions and verify workset…
Ensure directory is filled in with the file path where the source was exported to from DataStage.
Left click on Workset Structure…
Create directories to match EtlImportExport subdirectories if not already done. You can do this by
right clicking on GST_SCORE:WSO in this case and using the option ‘New Directory’.
Once this is done, right click on Job Designs…
and select New Item.
Browse for file…
Open item to be checked in…
Create new item…
Reply OK to message… and Close.
You should now see that the job has been imported and checked into Dimensions.
3.3 Update Existing Project
3.3.1 Checkout from Dimensions/Import to DataStage
4.0 Specific Sourcing Procedures and Notes
4.1 Tandem
See Appendix A: Tandem Extracts using Genus. The following screens relate to sourcing from
Tandem in parallel mode. Note that we start with a ‘Sequencer’; this means that when we go to
production, it is with a sequencer, not just a job.
The InitiateGenus (Execute Command) stage makes the connection to Tandem via Genus.
Please see the appendix for a more thorough description of the command line and all
parameters. An example of the command line for sequential mode from test (Dave):
An example of the command line for parallel mode from production (GDBx) using the GAA
instance (Note that the Genus instance is defined on GDB2):
An example of the command line for parallel mode from production (GDBx) using the ETL
instance (Note that the Genus instance is defined on GDB4):
4.2 DB2
The DB2 API Database stage must be used when connecting two different types of platforms.
For example, a DataStage job running on a Sun Solaris UNIX box that accesses data from a
mainframe DB2 table must use the DB2 API Database stage. Likewise, a Sun Solaris box
that accesses data from an IBM AIX UNIX box must also use the DB2 API Database stage. A
job running on an IBM AIX UNIX box that accesses data from a DB2 (UDB) table on an IBM
AIX UNIX box should use the DB2 Enterprise Database stage.
4.3 Mainframe Flat Files
4.3.1 Cobol Include Import
Access DataStage Designer. If the COBOL FD file already exists, right click on it to Import a
Cobol File Definition. If the COBOL FD file does not exist, right click on Table Definitions to
Import a Cobol File Definition and the COBOL FD file will be created automatically.
Navigate to the directory where you FTP'd the include.
Select the appropriate include and open...
Left click on Import...
You should now see the include listed in the COBOL FD file.
If you need to specify information about the include, double click on it.
In this case, the file has fixed width columns with no space between columns.
4.3.2 FTP from Mainframe
Double left click on the Complex Flat File stage to access its properties:
Note that the Data Format is set to EBCDIC and the Record Style is Binary.
To verify DataStage is reading the file correctly, click on View Data at this point.
4.4 Oracle
4.5 Other
5.0 Specific Target Procedures and Notes
5.1 Tandem
DataStage does not support updating Tandem tables. To update Tandem tables, create flat files
which can be ftp’d to Tandem. An update process on Tandem will then need to be created.
Projects such as Guest Scoring have used Data Loader.
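A rough sketch of the transfer step is shown below; the host, user, password variable, local file,
and Guardian file name are placeholders rather than actual project values:
# Rough sketch only: push an extract file to Tandem for the update process to pick up.
# Host, user, password variable, and file names are placeholders.
ftp -n tandem.host <<EOF
user etluser $FTP_PASSWORD
ascii
put /apps/etl/gtl/data/GTLScoreUpdates.dat SCORUPD
quit
EOF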
5.2 DB2
See 4.2 DB2.
5.3 Oracle
5.4 SAS
Note: Tandem uses a default date of “01-01-0001”. SAS converts this to “01-01-2001”. Be
aware that you may have to modify Tandem dates of “01-01-0001” to another date such as
“12-31-9999”, depending on the target database.
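If the substitution is made on a delimited extract file before it reaches the target load, a minimal
sketch (the file names are placeholders; the replacement date depends on the target database) is:
# Rough sketch only: swap the Tandem default date for a high date in a delimited extract file.
sed 's/01-01-0001/12-31-9999/g' GTLExtract.dat > GTLExtract_fixed.dat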
5.5 Other
6.0 Unit Testing Procedures
The following illustration shows an existing application and properties behind each job.
7.0 Promotion to Test
Note: The CMN/WBSD process is supported for promoting to test, but not required.
8.0 Promotion to Production
9.0 Unscheduled Changes
1. Scripts and DataStage components need to be checked out from Dimensions, modified, and
checked back into Dimensions by the programming staff.
2. An urgent CMN needs to be created, and the objects staged into WBSD.
3. ETL oncall will need to be paged and informed that they need to unprotect the project.
4. WBSD oncall will need to be paged and informed that they must run their process for moving
the changed objects into production.
Appendix A: Tandem Extracts using Genus
Extracting data from the Tandem database for Guest Scoring requires the use of the Genus Data
Transfer Tool. In order to initiate the data transfer from within Ascential DataStage, a Genus tool
called xfercmd is executed from a command line wrapper stage in DataStage. Xfercmd accepts
transfer specification parameters at the command line prompt and sends the data transfer
request to Tandem.
When executed, the xfercmd program initiates a connection to a specific table or view in Tandem,
and extracts data from the table based on the set of parameters passed in the command line. In
order to achieve optimum performance, the Guest Scoring project design takes advantage of two
primary features in xfercmd: Node Aware, and the creation of Unix Named Pipes.
Node Aware: This feature was added by Genus to allow the option to start the data transfer
process across all local nodes, using all logical partitions, as opposed to routing all data through a
configured primary node. This requires the user to configure the parameters for all participating
NSK nodes.
Named Pipes: This allows Genus to extract data from Tandem and send it to a Unix Named Pipe,
as opposed to landing the data as a file once it is extracted. This improves performance when
extracting large amounts of data because DataStage can be configured to read data directly from
the Named Pipes, thus avoiding the additional time needed to land the file for DataStage.
The following list highlights the options for executing the xfercmd tool, along with specific
examples of how each switch is being used in the Guest Scoring design.
-o <table/view name> option specifies the SQL/MX table/view name from which data needs to be
extracted.
-at <associated table name> option specifies the SQL/MX table name associated with a view. This
parameter is required only in the case of a multi-stream view extract.
-sp <sql predicate> option specifies a SQL predicate to be appended to the query generated
for extraction.
Example for Guest Scoring: this option allows the job to extract only the subset of data required.
To extract only data in partition number 15:
-et <execution time> option specifies the execution time at which the extraction will start. If not
specified, extraction starts immediately.
For Guest Scoring, Node Aware allows for parallel extracts from Tandem by sending data using
all 4 nodes, across all 64 partitions. When Node Aware is used, the -tf switch must be used to
indicate the node name and target folder:
-nS <no. of streams> option for specifying the number of streams. If the node aware option is
specified, this option is ignored.
For Guest Scoring, this option was used only during testing, since parallel processing is required
for performance. It is used along with the -tf switch to indicate the target folder and file(s) for the
file or named pipe. The following example uses 2 streams, sending data to two named pipes.
-cpu <cpu no.> option for specifying CPU number. Applicable only to single stream transfer.
-pp <process priority (L/M/H)> option for specifying the process priority of extraction processes. L,
M and H specify Low, Medium and High respectively. Default is M (Medium).
-cr <compression ratio (L/M/H)> option for specifying the data compression ratio for extracts. L, M
and H specify Low, Medium and High respectively. Default is L (Low).
-ol <output location (FILE/SAS/PIPE)> option for specifying the type of output location.
FILE indicates data will be put into the file specified by the Target File option.
SAS indicates data will be put into the SAS dataset specified by the Target File option.
PIPE indicates a named pipe with the name specified by the Target File option will be created.
Third party applications can read from the created named pipe and import the data into their
system.
Guest Scoring uses the PIPE option, which sends the data to a named pipe, where it is then read
by DataStage.
-df <data format (DF/FF)> option for specifying the data format of extracted data. DF indicates
Delimited Data Format, while FF indicates Fixed Width Data Format.
This option is not used for Guest Scoring. Headers are not required on the data files.
-fd <field delimiter (|/,/;/!)> option for specifying field delimiter to be used in the extracts. Default is
| (pipe character).
-rd <record delimiter (CR/LF/CRLF)> option for specifying record delimiter to be used in the
extracts. CR indicates Carriage Return, LF indicates Line Feed and CRLF indicates combination
of Carriage Return and Line Feed.
-tc option indicates that character data in the extracts should be trimmed. This option is valid for
Delimited Data Format only. Default is no character trimming.
-dtc <date-time format (SAS/MS)> option specifies the date-time conversion routines to be used
on date fields. SAS indicates a format equivalent to SAS, and MS indicates a format equivalent to
Microsoft SQL Server.
-tf <target file> option specifies the location of remote files. There should be one entry for each
stream. For node aware transfers, a ‘<node-name>=<target-folder>’ combination must be used.
Guest Scoring uses this option along with the Node Aware option to specify the Node and
destination folder for the named pipes.
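As a purely illustrative combination of the switches described above (the table name, predicate,
node name, and target folder are placeholders rather than actual Guest Scoring values, and the
exact predicate format and quoting should be confirmed against the production command line):
# Illustrative only -- not the Guest Scoring command.
xfercmd -o CAT.SCH.PURCHASE_VIEW \
        -sp "PART_NUM = 15" \
        -ol PIPE \
        -df DF -fd "|" -rd LF -tc \
        -tf "GDB1=/apps/genus/pipes"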
Following is an example of xfercmd using all required parameters to extract purchase data
records added since a given date from MITM into 64 named pipes:
Error Handling
Every time a Genus table extract is performed, Genus logs rows in a set of Tandem tables.
These tables provide information on the success or failure of each partition, as well as the
number of records extracted for each partition. After executing the xfercmd command line to
extract the data, a script will run to query these tables for failure messages, and return a success
or failure, along with a row count, to DataStage. The DataStage transformation job will either run,
run with errors, or not run based on the output of this script.
Return Partial Success to DataStage with the number of streams aborted and the number of
streams successful.
Genus Integration with DataStage
In order to execute xfercmd from within DataStage, an “Execute Command” stage is required with
which to wrap the command line and its parameters. This stage, when executed, will call the
command line wrapped within it.
The DataStage Sequence canvas shown above contains an “Execute Command” stage.
The command line required for a particular job is embedded in this stage. The above screen shot
shows the properties of the “Execute Command” stage, and how the command line is embedded
within it.
When this job is executed successfully, and all named pipes have been created, it will return an
‘OK’ to DataStage at which point the named pipes can be read by other DataStage jobs.
The DataStage jobs to extract data for the various GIFs will include a Sequence job for each
table. Each Sequence job will include, at a minimum, an “Execute Command” stage and a “Job
Activity” stage. The “Execute Command” stage will contain the xfercmd command line and the
required parameters for that particular table. The “Job Activity” stage will execute the DataStage
job to read and transform the data from the named pipes created from that particular table. For
details related to the “Job Activity” stage, please see the Data Transformation Design section of
this document.
For example, there will be two jobs to extract and transform data from MITM. In this example, the
Execute Command job is called “ExtractMITM.” Once this stage completes, the job continues to
the next stage, which is a Job Activity Stage called “ReadMITM.” This stage simply calls
whichever DataStage Parallel job is used to read and transform the data from MITM.
Defining Tables to Genus
In order for Genus to extract data from a table, that table must be defined to Genus for your
schema.
If you do not see the table you want to extract data from in the Table/View drop down list, then
you will need to add an entry into the mpalias table on Tandem (GDB2).
1. Find the Guardian name for the table. Log on to GDB2 and, in sqlci, issue the following
command (assume the ANSI name for the table is SCORE_TEST and the schema it was
added to by the DBAs is ODBC_TFSOUT):
>>select * from $dsmscm.sql.mpalias where ansi_name like
+>"%TFSOUT.SCORE_TEST%" browse access;
2. Insert an entry into the mpalias table of the Guardian table name for your schema. The
following example uses the ETL_MANAGER schema:
>>insert into \gdb4.$dsmscm.sql.mpalias values (
+>"GDB4_GD4P01_MINCAT.ETL_MANAGER.SCORE_TEST", "TA",
+>"\GDB1.$GD1406.TFSOUT.SCORTST", 999999999999999999);
Appendix B: Table/File Categories
The following is a list of table categories that would exist in the DataStage repository for the
purpose of organizing the various types of metadata that an ETL developer might need to
populate the links to various passive stages. A project should work with the MetaStage
administrator to design the DataStage repository table definition categories. Imports of that
data should then take place prior to construction. The ETL developer would simply load the links
from tables in these categories. If changes are made to the metadata, e.g., COBOL copybook
metadata when adding aggregation columns, then that changed metadata would be saved into the
Cobol FD Changed Tables category.
Note: These categories can be combined or new ones created. This is intended as a starting
point for a project.
Appendix C: Deleting ‘Saved’ Metadata
If you have accidentally saved metadata into the Table Definitions ‘Saved’ folder from an active
link, you will see the link name appear in the ‘Saved’ folder.
To see all programs referenced by the ‘Saved’ metadata, go to the DataStage Manager tool and
run usage analysis on it.
In this example, this report shows that the ‘Saved’ metadata is used in only one job.
2. The second way leaves the mappings and derivations as is, but requires that you edit
underlying DataStage code in an exported file.
The remainder of this appendix shows screen shots and directions for this second method.
To prepare for the editing, at the usage analysis report screen, highlight the Source, right click,
and then left click on Copy.
In DataStage Manager, export the job.
You will actually need to export the job before viewing the underlying code. Remember that if you
have the job open in Designer, you will not be able to export it. So close it in Designer if it is
open.
Click on Export after designating where the job is to be exported to.
After successfully exporting, click on the Viewer tab.
Select ‘Run this program’ and type in ‘notepad’. Then click on View.
Once Notepad is brought up, click on the Edit menu option, then click on Replace.
Paste in (Ctrl-V) the Source that was copied during the editing preparation. Add a forward slash
to each existing forward slash.
Leave ‘Replace with:’ blank.
Click on ‘Replace All’.
You can verify there are no more occurrences of the Source by clicking on ‘Find Next’.
Close out the Replace screen.
Save your changes by clicking on File, then Save. Or if you close the Notepad, you will see the
following screen:
Click on OK.
To verify changes, perform another usage analysis on the link name.
It should result in an empty report.
It should now be safe to delete the link metadata from the ‘Saved’ folder.
Click on Yes.