Data Loading
Windchill® 8.0
June 2005
Copyright © 2005 Parametric Technology Corporation. All Rights Reserved.
Information described herein is furnished for general information only, is subject to change without notice, and
should not be construed as a warranty or commitment by PTC. PTC assumes no responsibility or liability for any
errors or inaccuracies that may appear in this document.
The software described in this document is provided under written license agreement, contains valuable trade
secrets and proprietary information, and is protected by the copyright laws of the United States and other
countries. It may not be copied or distributed in any form or medium, disclosed to third parties, or used in any
manner not provided for in the software license agreement except with prior written approval from PTC.
Product Development Means Business, Product Makes the Company, PTC, the PTC logo, PT/Products, Shaping
Innovation, The Way to Product First, and Windchill are trademarks or registered trademarks of Parametric
Technology Corporation or a subsidiary in the United States and in other countries.
Third-Party Trademarks
Adobe, Acrobat, Distiller and the Acrobat Logo are trademarks of Adobe Systems Incorporated.
Advanced ClusterProven, ClusterProven, and the ClusterProven design are trademarks or registered trademarks
of International Business Machines Corporation in the United States and other countries and are used under
license. IBM Corporation does not warrant and is not responsible for the operation of this software product. AIX
is a registered trademark of IBM Corporation. Allegro, Cadence, and Concept are registered trademarks of
Cadence Design Systems, Inc. Apple, Mac, Mac OS, and Panther are trademarks or registered trademarks of
Apple Computer, Inc. AutoCAD and Autodesk Inventor are registered trademarks of Autodesk, Inc. Baan is a
registered trademark of Baan Company. CADAM and CATIA are registered trademarks of Dassault Systemes.
COACH is a trademark of CADTRAIN, Inc. DOORS is a registered trademark of Telelogic AB. FLEXlm is a
trademark of Macrovision Corporation. Geomagic is a registered trademark of Raindrop Geomagic, Inc.
EVERSYNC, GROOVE, GROOVEFEST, GROOVE.NET, GROOVE NETWORKS, iGROOVE, PEERWARE,
and the interlocking circles logo are trademarks of Groove Networks, Inc. Helix is a trademark of Microcadam,
Inc. HOOPS is a trademark of Tech Soft America, Inc. HP-UX is a registered trademark of Hewlett-Packard
Company. I-DEAS, Metaphase, Parasolid, SHERPA, Solid Edge, and Unigraphics are trademarks or registered
trademarks of UGS Corp. InstallShield is a registered trademark and service mark of InstallShield Software
Corporation in the United States and/or other countries. Intel is a registered trademark of Intel Corporation. IRIX
is a registered trademark of Silicon Graphics, Inc. LINUX is a registered trademark of Linus Torvalds. MainWin
and Mainsoft are trademarks of Mainsoft Corporation. MatrixOne is a trademark of MatrixOne, Inc.
Mentor Graphics and Board Station are registered trademarks and 3D Design, AMPLE, and Design Manager are
trademarks of Mentor Graphics Corporation. MEDUSA and STHENO are trademarks of CAD Schroer GmbH.
Microsoft, Microsoft Project, Windows, the Windows logo, Windows NT, Visual Basic, and the Visual Basic
logo are registered trademarks of Microsoft Corporation in the United States and/or other countries. Netscape
and the Netscape N and Ship's Wheel logos are registered trademarks of Netscape Communications Corporation
in the U.S. and other countries. Oracle is a registered trademark of Oracle Corporation. OrbixWeb is a registered
trademark of IONA Technologies PLC. PDGS is a registered trademark of Ford Motor Company. RAND is a
trademark of RAND Worldwide. Rational Rose is a registered trademark of Rational Software Corporation.
RetrievalWare is a registered trademark of Convera Corporation. RosettaNet is a trademark and
Partner Interface Process and PIP are registered trademarks of RosettaNet, a nonprofit organization. SAP and R/3
are registered trademarks of SAP AG Germany. SolidWorks is a registered trademark of SolidWorks
Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of
SPARC International, Inc. in the United States and in other countries. Products bearing SPARC trademarks are
based upon an architecture developed by Sun Microsystems, Inc. Sun, Sun Microsystems, the Sun logo, Solaris,
UltraSPARC, Java and all Java based marks, and "The Network is the Computer" are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and in other countries. TIBCO, TIBCO Software,
TIBCO ActiveEnterprise, TIBCO Designer, TIBCO Enterprise for JMS, TIBCO Rendezvous,
TIBCO Turbo XML, TIBCO BusinessWorks are the trademarks or registered trademarks of TIBCO Software
Inc. in the United States and other countries. WebEx is a trademark of WebEx Communications, Inc.
Parametric Technology Corporation, 140 Kendrick Street, Needham, MA 02494 USA 010505
Contents
Using the CSV2XML Utility and Validating the XML Format .......................... 3-1
About the CSV2XML Utility.........................................................................................................3-2
Converting CSV Files for Multibyte Operating Systems .............................................................3-2
Converting CSV Files to XML Format Files ............................................................................... 3-2
Working with Larger Files .......................................................................................................... 3-3
CSV2XML Arguments ................................................................................................................ 3-3
Validating the XML Format ........................................................................................................ 3-5
Selecting the Appropriate DTD at Runtime ......................................................................... 3-5
Using XML Spy to Validate the Data Files .......................................................................... 3-6
Loading Legacy Data Using the LoadFromFile and LoadFileSet Utilities..... 4-1
Overview: Data Loading Utilities ................................................................................................ 4-2
Using the LoadFromFile Utility ................................................................................................... 4-2
Sample Command Lines ..................................................................................................... 4-4
More About CONT_PATH ................................................................................................... 4-4
Using the LoadFileSet Utility ...................................................................................................... 4-5
About the Load Files .................................................................................................................. 4-6
Load Set File ....................................................................................................................... 4-6
Sample Load Set File .......................................................................................................... 4-7
Object XML Files ................................................................................................................. 4-7
Loading Legacy Data ................................................................................................................. 4-8
Working with Containers ............................................................................................................ 4-9
Containers in PDM and Windchill PDMLink ........................................................................ 4-9
Loading Product Objects and Parts: Windchill PDMLink Example ............... 6-1
Before You Begin ....................................................................................................................... 6-2
Running the LoadFromFile Utility ........................................................................................ 6-2
Loading Users ............................................................................................................................ 6-5
Creating Organizations .............................................................................................................. 6-5
Creating Product Containers...................................................................................................... 6-6
Creating Assemblies ......................................................................................................... 6-10
About This Guide
The Windchill Data Loading Reference and Best Practices Guide provides a
detailed description of the LoadFromFile framework and the LoadFromFile,
LoadFileSet, and CSV2XML utilities. It also provides best practices for preparing
and loading legacy data, with specific examples for a Windchill PDMLink and
Pro/INTRALINK 8.0 system.
Related Documentation
The following documentation may be helpful:
• Windchill Installation and Configuration Guide -- Windchill
• Windchill Customizer’s Guide
• Getting Started with Pro/INTRALINK 8.0
If these books are not installed on your system, see your system administrator.
Technical Support
Contact PTC Technical Support via the PTC Web site, phone, fax, or e-mail if you
encounter problems using Windchill.
For complete details, refer to Contacting Technical Support in the PTC Customer
Service Guide enclosed with your shipment. This guide can also be found under
the Support Bulletins section of the PTC Web site at:
http://www.ptc.com/support/index.htm
The PTC Web site also provides a search facility that allows you to locate
Technical Support documentation of particular interest. To access this
page, use the following link:
http://www.ptc.com/support/support.htm
You must have a Service Contract Number (SCN) before you can receive
technical support. If you do not have an SCN, contact PTC License Management
using the instructions found in your PTC Customer Service Guide under
Contacting License Management.
xiii
Documentation for PTC Products
PTC provides documentation in the following forms:
• Help topics
• PDF books
To view and print PDF books, you must have the Adobe Acrobat Reader installed.
All Windchill documentation is included on the CD for the application. In
addition, books updated after release (for example, to support a hardware platform
certification) are available from the Reference Documents section of the PTC
Web site at the following URL:
http://www.ptc.com/appserver/cs/doc/refdoc.jsp
Comments
PTC welcomes your suggestions and comments on its documentation—send
comments to the following address:
[email protected]
Please include the name of the application and its release number with your
comments. For online books, provide the book title.
Documentation Conventions
Windchill documentation uses the following conventions:
Third-Party Products
Examples in this guide referencing third-party products are intended for
demonstration purposes only. For additional information about third-party
products, contact individual product vendors.
Code Examples
Some code examples in this guide have been reformatted for presentation
purposes and, therefore, may contain hidden editing characters (such as tabs and
end-of-line characters) and extraneous spaces. If you cut and paste code from this
manual, check for these characters and remove them before attempting to use the
example in your application.
Section Page
Overview of Load Utilities..................................................................................1-2
When Do I Use LoadFromFile and Import/Export? ...........................................1-5
Existing Load Methods .......................................................................................1-6
This chapter describes the data loading process and the tool options for cleansing
legacy data prior to loading it to the Windchill database.
Section Page
The Data Loading Process...................................................................................2-2
Data Cleansing: Overview of the Tools Available..............................................2-3
Working with a Sample Data Set ........................................................................2-7
Creating Data Extraction and Format Requirements ..........................................2-8
Tip: It is strongly recommended that PTC review the raw data prior to loading.
This ensures that resource bundles are properly prepared and that the data is of the
expected format and length. When possible, data should be reviewed and evaluated
prior to engagement activities. This also allows proper time and resource estimates
to be created.
In this process, there is a built-in review step prior to loading the data. For
example, a data file of parts can be examined to produce a unique list of Part
Types. This list should then be compared against the PartTypeRB.rbInfo file.
Similarly, a unique list of part quantities can be compared to the
QuantityUnitRB.rbInfo file.
The review process and data cleansing typically require the use of a tool to
examine the data. The following sections review the tool options for data
cleansing.
Note: Windchill CounterPart can also serve as a conversion tool.
Using a tool such as CounterPart provides a layer of abstraction and protects the
customer from changing load file formats if the PTC loaders change. For more
information on CounterPart, refer to Windchill CounterPart: Pros and Cons
below.
Note: As of Windchill 7.0, data files must be in XML format and must comply
with the DTD supplied with the Windchill installation (available from the
standardX05 directory). Refer to Validating the XML Format later in this guide
for information on creating and validating XML data files.
The following sections provide the pros and cons of different tool options for
cleansing the data.
Text Editors: Pros and Cons
Pros
• Simple to use
• Usually available on most operating systems
• Provide a quick review of data
• Minimal learning curve
• Can be used for simple search and replace
• No additional "coding" or preparation is required to view data in this tool
• Works for XML, CSV, or other text files
Microsoft Excel: Pros and Cons
Pros
• Simple to use
• Usually available on most Windows operating systems
• Provides a quick review of data
• Can be used for simple search and replace
• Can develop formulas to validate data
• Minimal learning curve
• Limited additional data transformation is available by manually moving
columns
• Data can be filtered and sorted. This provides an easy approach to identifying
duplicate parts and documents as well as producing a list of items to add to
Resource bundles.
Cons
• Data must be parsed into columns manually (Data - Text To Columns)
• Limited to 65,536 rows of data per worksheet
• Not available when working on non-Windows platforms
• Requires that data be in row and column format and that the data be consistent
Custom Code: Pros and Cons
Pros
• Can be developed with special rules for data validation
• Transforms data from neutral (customer supplied) to load ready format.
• Allows for data segregation
• Unlike manual manipulation with a tool such as Excel, where there is the
potential for error, the development of custom code has consistent and
explainable outcomes
• Are transportable and shareable
• Will run on multiple operating systems
Cons
• Tightly coupled to input and output. A change in the input or output will
require a change to the custom code.
• No User Interface to review the data
• Usually requires the use of an additional tool for reviewing data
• Multiple skill sets are required
• After code is developed, the process remains a combination of manual and
programmatic efforts
Windchill CounterPart: Pros and Cons
Pros
• Can be developed with special rules for data validation
• Transforms data from neutral (customer supplied) to load ready format
• Can add rules for data validation
• Ideal for large data amounts
• Presents data in a user interface much like Microsoft Excel
• Formulas and filters can be applied on the fly
• Unlike manual manipulation with a tool such as Microsoft Excel, where there
is the potential for error, the development of custom code has consistent and
explainable outcomes
• Transportable and shareable
• Will run on multiple operating systems
• Provides a single source for reviewing and classifying (segregating) data
• All load ready data files can be produced at the same time
Cons
• Requires development of custom import and export transforms
• Steep learning curve
• Limited support
• Multiple skill sets are required
This chapter describes the CSV2XML utility required to convert CSV data files to
XML format files, as well as a suggested process for validating the XML files.
Note: Starting with Windchill 7.0, the data loader classes do not support CSV-
formatted data files.
Section Page
About the CSV2XML Utility ..............................................................................3-2
Converting CSV Files for Multibyte Operating Systems....................................3-2
Converting CSV Files to XML Format Files ......................................................3-2
Working with Larger Files ..................................................................................3-3
CSV2XML Arguments........................................................................................3-3
Validating the XML Format................................................................................3-5
About the CSV2XML Utility
CSV2XML is a command-line utility that converts CSV-formatted object files
to XML. Load files from Windchill 6.2.6 must be converted into XML format. The
CSV2XML utility only works with files ending in "CSV" and only with files that
have a comma ( , ) as the field separation token.
For files that use a different separator (for example, ~), open the file in a text
editor. Search and replace all commas ( , ) with a special string of characters (for
example, @#@#@). With no original commas left, replace the existing separator
token with commas. Save the file and run the CSV2XML conversion utility. Open
the new XML file and search and replace @#@#@ with commas.
Note: The CSV2XML utility does not generate XML for use by the LoadFileSet
utility.
If the -encoding argument is not specified, the CSV2XML utility uses the
operating system's default encoding. The output XML file is always in UTF-8.
PTC recommends that you create your test data using the CSV file format and
then use the conversion utility to convert it to XML, because it is easier and less
error prone to create the data in CSV format.
The syntax for the CSV2XML utility is as follows:
windchill wt.load.util.CSV2XML -input [input file or directory]
-output [output directory] -root [root directory]
-help -encoding [encoding of the source CSV file]
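For example, a hypothetical invocation converting a single CSV file might resemble the following; the file name, output directory, and encoding shown here are illustrative placeholders rather than values from a standard installation:

windchill wt.load.util.CSV2XML -input legacyParts.csv -output loadXMLFiles -encoding Shift_JIS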
Note: The Life Cycle Administrator requires that all Life Cycle objects be
stored in the System cabinets. Do not change the System cabinet designation
in the Life Cycle XML load files. Also, the data content in the XML files is
case-sensitive.
For large input files, it may be necessary to allocate 512 MB of memory:
java -Xmx512m wt.load.util.CSV2XML -input <file_name>.csv
CSV2XML Arguments
The following table describes the arguments available to the CSV2XML utility.
Argument: input [input file name or directory name]
Description: Specifies the file name of the CSV file or the directory where the
CSV files are located. Its path is relative to root. If a directory is specified, then
all the files in the directory are converted. If a file is specified, then only the file
is converted. This argument is optional.
To obtain the validation DTD, which is in sync with the runtime DTD, run the
following command:
windchill wt.load.util.UpdateEditDTDUtility
Using XML Spy to Validate the Data Files
XMLSPY allows for the orderly display and entry of XML data. Data can be
displayed in standard text form, grid entry, or browser mode. For example, a user
load file can be displayed in table format (grid mode), and data can be entered
directly in grid mode.
2. Click Browse in the dialog box that appears to locate and select the proper
DTD.
3. Click OK to assign the DTD.
This chapter describes the LoadFromFile and LoadFileSet utilities, the available
arguments, and the load file sets that accompany your Windchill installation. It
also reviews the process required to successfully load legacy data, beginning with
a review of the container hierarchy.
Section Page
Overview: Data Loading Utilities .......................................................................4-2
Using the LoadFromFile Utility ..........................................................................4-2
Using the LoadFileSet Utility..............................................................................4-5
About the Load Files ...........................................................................................4-6
Loading Legacy Data ..........................................................................................4-8
Working with Containers ....................................................................................4-9
Overview: Data Loading Utilities
In addition to the demonstration data that PTC provides, you may also have
legacy data that you would like to add to the Windchill database. PTC provides
two utilities to load legacy data: LoadFromFile and LoadFileSet.
LoadFromFile is a command line utility that is used to load a single, customized
data file into the Windchill database. LoadFileSet is a command line utility that is
used to load multiple, customized data files into the Windchill database. In both
cases, the data files must reside on the Windchill server.
For information on which object types require the use of the Import/Export
framework, refer to When Do I Use LoadFromFile and Import/Export? earlier in
this guide.
Note: The load utilities only process XML files. If your data is in CSV format,
use the CSV2XML utility to convert it to XML before loading the data into the
database. Refer to Converting CSV Files to XML Format Files earlier in this
guide for more information.
windchill wt.load.LoadFromFile -d D:\Dev\LoadfileforTesting\Part.xml -CONT_PATH \"/wt.inf.container.OrgContainer=TST/wt.pdmlink.PDMLinkProduct=part 4\"
Sample Command Lines
LoadFromFile is executed from the command line to load legacy data into the
Windchill database.
If you specify only the required arguments, then the command would look like:
windchill wt.load.LoadFromFile -d [data file name]
If you specify all the arguments, then the command would look like:
windchill wt.load.LoadFromFile -d [data file name]
-u [user name] -p [user password]
-CONT_PATH [/wt.inf.container.OrgContainer=MyOrg/
wt.inf.library.WTLibrary=MyLibrary]
Note: In the sample command above, you can also load the container into the
organization without the library. The container path for OrgContainer would be:
-CONT_PATH /wt.inf.container.OrgContainer=MyOrg
-CONT_PATH /wt.inf.container.OrgContainer=MyOrg/wt.pdmlink.PDMLinkProduct=Prod
-CONT_PATH /wt.inf.container.OrgContainer=MyOrg/wt.inf.library.WTLibrary=Lib
Note: When loading into a container, there is an implied, hidden folder called
"Default". To load into a folder other than the default, specify a path in the form
/Default/Widgets in the folder field of the load file.
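For instance, a part load file targeting a hypothetical subfolder named Widgets would carry a folder element similar to the following, matching the <csvfolder> usage shown in the part examples later in this guide:

<csvfolder>/Default/Widgets</csvfolder>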
Argument: file [load file set path]
Description: Specifies the path name of the load file set. The file path should be
relative to the load directory, which is <Windchill>/loadFiles by default. This
argument is required.
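As an illustration only, an invocation using the documented -file argument might resemble the following; the fully qualified class name wt.load.LoadFileSet and the load set file name are assumptions not taken from this guide, so verify them against your installation:

windchill wt.load.LoadFileSet -file legacyLoadSet.xml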
About the Load Files
There are two levels of XML files: the load file set and the object file. The load
file set specifies the set of object files and the container to load the data into. The
object file specifies the actual data of the objects to be loaded. The LoadFileSet
utility expects the load file set as input, and the LoadFromFile utility expects
object XML files as input.
The container is specified with a containerPath value, for example:
/wt.inf.container.OrgContainer=pjl-qa/wt.projmgmt.admin.Project2=Project14

It is recommended that the set of objects to be loaded in a load set all reside in a
single container. However, you can specify an alternative containerPath that
causes the objects in the current loadFile specification to go into a different
container than the other loadFile entries in the set. This should be specified only
under extraordinary circumstances.
Sample Object File
This is an example of what an object XML file may look like.
<?xml version="1.0" ?>
<NmLoader>
<csvname>defaultLayout</csvname>
<csvvalue>System Default</csvvalue>
<csvnode>/worklist/layout</csvnode>
<csvuser></csvuser>
<csvcontext></csvcontext>
</csvPrefEntry>
</NmLoader>
Note: In some situations, it is necessary to compile the data from all legacy
systems into one database and then prepare the data for loading to Windchill. In
other cases, it is prudent to load the data from each legacy system independently.
Carefully reviewing sample data from the legacy systems prior to beginning the
load process is extremely important.
Once the sample data set has been loaded successfully, the process to load legacy
data generally includes the following steps:
1. Back up the live Windchill system.
2. Freeze the legacy systems.
3. Extract the final data to be loaded.
Note: The following chapter provides detailed examples of how to load users,
organizations, products, and libraries using the utilities in Step 5.
Note: Once all objects are loaded, you can load the product structure separately
using the <csvAssemblyAddLoad> tag.
PDM Foundation Hierarchy
Site container
– Organization container
• WCPDM container
◆ Parts
◆ Documents
◆ CAD documents, etc.
In the PDM Foundation hierarchy, there is a single site container and a single
organization container. Within the organization container, there must be the
Windchill PDM container. Business-level objects such as parts, documents, and
CAD documents can be loaded into the Windchill PDM container.
This chapter describes the procedure for editing the csvmapfile.txt file and for
creating custom load methods.
Section Page
Customizing Loading ..........................................................................................5-2
Modifying Data Files...........................................................................................5-3
Creating New Methods for Loading....................................................................5-5
Customizing Loading
You can customize loading by either changing the data in the data files provided
in the Windchill\loadFiles directory or by adding new methods to load locally
customized classes.
Changing the data in the files loads local data into existing classes. Refer to
Modifying Data Files below for more information.
The first two fields, class and method, are the keys to the map file. Two keys are
given to each line in the map file to allow multiple definitions for a given actual
class in Windchill. This allows alternate names for one class, multiple functions
for one class, or different input fields.
Both class and method are arbitrary strings. The class value matches the class in
the data file. It is used only to match the map and data files so it can be any string
as long as the same string is used in both. The method value matches the method
variable passed either on the command line or as a parameter to doFileLoad.
The real method field is the fully qualified method name that load calls through
introspection with the values from the data file. The attributes 1 through n match
the text used in the real method. The order in the map file is used to retrieve the
values from the data file.
The following are possible scenarios for modifying the csvmapfile.txt file:
• Data in the data file has the data fields in a different order. Edit the
csvmapfile.txt to make the order the same as the data file. See Modifying Data
Files below.
Note: A map file entry must be written on a single line; continuation of an entry
across multiple lines is not accepted.
Note: Do not put blank lines in the map file. One or more empty lines between
entries cause the load to fail.
Note: The csvmapfile.dtd file is a partial DTD and therefore cannot be used
for validation. To obtain a DTD file for validation, run windchill
wt.load.util.UpdateEditDTDUtility. This propagates your changes to the
<Windchill>\loadXMLFiles\standardX05.dtd file.
3. Using an XML editor, make changes to the XML data files and validate them
against the DTD obtained in Step 2.
Note: The source code for the LoadDoc and LoadPart loaders is available for
customization. You can create new methods for loading by editing this source
code. Refer to the appropriate section below.
This vector is used to give more informative messages back to the command
line. If the object that was manipulated by this method does not implement
getIdentity or that information would not be useful in a message to the
command line, a string should be added to the return_object instead; for
example:
String msg = "Type = Content Item, Identity =
" + appData.getFileName();
return_objects.addElement(msg);
The first parameter is the string from the map file. The last parameter is used to
indicate if the field is required, not required, or blank okay.
If the field is required or the load should fail, use LoadServerHelper.REQUIRED
and check if the return value is null. LoadServerHelper.getValue generates an
error message if the value is required and the field value is null or an empty string.
If the field is not required and you want the return set to null for no value or empty
strings, use LoadServerHelper.NOT_REQUIRED.
If you want to know the difference between no value being given for a field and an
empty string, use LoadServerHelper.BLANK_OKAY. The blank okay option
returns null if the attribute is not found in the hash table, meaning there could be a
format issue with the map and data files. The blank okay option returns "" if the
attribute is found in the hash table but the value was blank, which, depending on
the attribute, could be okay.
(ContentHolder)LoadServerHelper.getCacheValue(CURRENT_CH_KEY);
This method adds the content file to the document. If you are creating an
object that can hold content (files and URLs) and you want to load multiple
content items on lines following the object in the data file, you must cache the
object using the following constant:
private static String CURRENT_CH_KEY = "Current ContentHolder";
If you want to cache an object for another reason, you must create a unique
key string. It is recommended that you clear your cached object at the
beginning of the next create so that if the create fails, the next operation
depending on it will fail and not corrupt other data. The cache of all objects is
cleared after the last data line is processed.
Note: The cache does not perform garbage collection by itself. If you cache many
items with different keys and the method runs for every line in a load file, this
limits the practical size of your load file because you can run out of memory.
The load utility creates a transaction block around the processing of each line
of data. If the method returns false, the transaction block is rolled back. If it
returns true, it is committed. The cache is not cleared if the method returns
false.
This chapter provides a detailed example of the data loading process for users,
product objects, and parts. It begins by creating organizations, product containers,
and library containers in a Windchill PDMLink system.
Section Page
Before You Begin................................................................................................6-2
Loading Users......................................................................................................6-5
Creating Organizations........................................................................................6-5
Creating Product Containers ...............................................................................6-6
Creating Library Containers..............................................................................6-12
Loading Product Objects and Parts ...................................................................6-17
Loading Relationships Between Parts and Documents .....................................6-26
Note: If the CONT_PATH argument is not specified, the target container path is assumed to be "Classic".
"Classic" is an out-of-the-box container with a WTLibrary named Windchill
PDM. It holds user data such as parts, documents, etc., in a Windchill PDM
solution.
Tip: Run the utility from the same directory where the data file resides.
a. Create an XML file for the product container that looks similar to the following:
<NmLoader>
<csvProductContainer
handler="wt.part.LoadPart.createProductContainer" >
<csvuser></csvuser>
<csvname>TestLoad1</csvname>
<csvnumber>TestLoad1</csvnumber>
<csvdescription>Test</csvdescription>
<csvview></csvview>
<csvsource></csvsource>
<csvdefaultUnit></csvdefaultUnit>
<csvtype></csvtype>
<csvcontainerTemplate>General Product</csvcontainerTemplate>
</csvProductContainer>
</NmLoader>
b. Load this XML file into the organization called MyOrg using the LoadFromFile utility.
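A command of the following form can be used; the data file name (ProductContainer.xml) and the credentials are placeholders, not values from a standard installation:

windchill wt.load.LoadFromFile -d ProductContainer.xml -u <site administrator> -p <password> -CONT_PATH /wt.inf.container.OrgContainer=MyOrg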
Note: You must run this command as a user who belongs to the Site
Administrator group.
a. Create an XML file to load the WTProduct that looks similar to the following:
Note: This example will also load a soft attribute called Results.
<?xml version="1.0"?>
<NmLoader>
<csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
<csvuser></csvuser>
<csvpartName>TestLoad1</csvpartName>
<csvpartNumber>TestLoad1</csvpartNumber>
<csvtype>separable</csvtype>
<csvsource>make</csvsource>
<csvfolder>/Default</csvfolder>
<csvlifecycle>Default</csvlifecycle>
<csvview></csvview>
<csvteamTemplate></csvteamTemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion></csvversion>
<csviteration></csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>Results</csvdefinition>
<csvvalue1>MyResults</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvpublishFlag></csvpublishFlag>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTPart></NmLoader>
b. Load this file into the product container created above, TestLoad1.
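A command of the following form can be used; the data file name (Product.xml) and the credentials are placeholders:

windchill wt.load.LoadFromFile -d Product.xml -u <site administrator> -p <password> -CONT_PATH /wt.inf.container.OrgContainer=MyOrg/wt.pdmlink.PDMLinkProduct=TestLoad1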
Note: You must run this command as a user who belongs to the Site
Administrator group.
The product called TestLoad1 should now be visible in the Product tab.
Loading Users
Once converted from the legacy data format, the data file can be processed as is
without additional modification.
Tip: Prior to processing the load file, validate that all users belong to the proper
organization and have an e-mail address. This is done by examining the load file
prior to processing.
Tip: It is important that users have an e-mail address. Without an e-mail address
entry, users will not show up in the Find Users dialog box.
Creating Organizations
The concept of organization is new to Windchill PDMLink so no CSV2XML
conversion is required.
Creating Product Containers
Step 1
To use the customer-supplied data, the data file was converted to XML using the
CSV2XML utility. The resulting XML file should resemble the following:
<?xml version="1.0" ?><!DOCTYPE NmLoader SYSTEM "standardX05.dtd">
<NmLoader>
<csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
<csvuser></csvuser>
<csvpartName>FINAL ASSEMBLY</csvpartName>
<csvpartNumber>Prod_001</csvpartNumber>
<csvtype>fg</csvtype>
<csvsource>make</csvsource>
<csvfolder>/Prod_001</csvfolder>
<csvlifecycle>Default No Routing</csvlifecycle>
<csvview></csvview>
<csvteamTemplate>System.Default</csvteamTemplate>
<csvlifecyclestate>RELEASED</csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion>-</csvversion>
<csviteration>1</csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/Productline</csvdefinition>
<csvvalue1>Military</csvvalue1>
<csvvalue2></csvvalue2>
</csvIBAValue>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/Program</csvdefinition>
<csvvalue1>No Program</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/spectype</csvdefinition>
<csvvalue1>No Type</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/Legacyrevision</csvdefinition>
<csvvalue1>*</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/Legacyrevdate</csvdefinition>
<csvvalue1>04-OCT-02</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
</NmLoader>
Step 2
After the data was updated, it was processed using a custom built XSL stylesheet.
<?xml version="1.0" encoding="iso-8859-1"?>
<xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output
method="xml"
indent="yes"
encoding="iso-8859-1"
doctype-system="standardX05.dtd"
/>
<xsl:template match="/">
<NmLoader>
<xsl:for-each select="//csvProduct">
<csvProductContainer
handler="wt.part.LoadPart.createProductContainer" >
<csvname><xsl:value-of select="csvpartNumber"/>
</csvname>
<csvnumber><xsl:value-of select="csvpartNumber"/>
</csvnumber>
<csvdescription><xsl:value-of select="csvpartName"/>
</csvdescription>
<csvview></csvview>
<csvsource></csvsource>
<csvdefaultUnit></csvdefaultUnit>
<csvtype></csvtype>
<csvcontainerTemplate>General Product</csvcontainerTemplate>
</csvProductContainer>
</xsl:for-each>
</NmLoader>
</xsl:template>
</xsl:stylesheet>
Tip: Although not illustrated in this example, additional XSL code may be
developed to test and validate the data. For example, if a value is empty but is
really required, a default value could be substituted.
Step 3
After applying the stylesheet above, a load-ready file is produced. Product
containers belong to an organization, so when calling the load utility, the proper
organization should be specified.
When running the load utility for product containers, a user must be specified. It is
a requirement that this user be a Product Creator for the specified organization. In
addition, they should belong to the Administrator group. This is usually done
manually.
Multiple product containers can exist in one load file.
Tip: When calling wt.load.LoadFromFile in UNIX and the container has a space
in the name, place quotes around the container. In addition be sure to escape the
quotes with a backslash.
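For instance, assuming a product container named Common Parts (an illustrative name containing a space) under the MyOrg organization, a UNIX invocation would escape the quotes as follows; the data file name is also a placeholder:

windchill wt.load.LoadFromFile -d Containers.xml -CONT_PATH \"/wt.inf.container.OrgContainer=MyOrg/wt.pdmlink.PDMLinkProduct=Common Parts\"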
Tip: To specify alternate Default values, adjust the product (End Item)
Initialization Rules from the Site Tab within Windchill PDMLink.
Creating Assemblies
Creating assemblies is necessary for creating Products that use parts which may
be common to other products, or may exist independently in other libraries.
The first thing that should be noted is that the <csvAssemblyAddLoad> tag can be
used ONLY with existing parts. In other words, it can only be used to create new
assemblies using existing parts. The <csvAssemblyAddLoad> CANNOT be used
to create a brand new part.
For that purpose, <csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
must first be used to create the part and then add constituent parts. In this example,
we look at two approaches.
The first approach adds existing parts as constituent parts of an existing assembly part, Part-1:
<NmLoader>
<csvAssemblyAddLoad
handler="wt.part.LoadPart.addPartToAssemblyLoad" >
<csvassemblyPartNumber>Part-1</csvassemblyPartNumber>
<csvconstituentPartNumber>ExistingPart-1</csvconstituentPartNumber>
<csvconstituentPartQty>1</csvconstituentPartQty>
<csvconstituentPartUnit>ea</csvconstituentPartUnit>
</csvAssemblyAddLoad>
<csvAssemblyAddLoad
handler="wt.part.LoadPart.addPartToAssemblyLoad" >
<csvassemblyPartNumber>Part-1</csvassemblyPartNumber>
<csvconstituentPartNumber>ExistingPart-2</csvconstituentPartNumber>
<csvconstituentPartQty>2</csvconstituentPartQty>
<csvconstituentPartUnit>ea</csvconstituentPartUnit>
</csvAssemblyAddLoad>
</NmLoader>
Creating a New Part Called LoadedAssm-1 and Adding Existing Parts as Constituent
Parts
This example assumes the following:
• ExistingPart-1 resides in some other product or library.
• MyProduct-1 exists in the organization, MyOrg.
• The XML data resides in the file, DataFile.xml.
Given the above assumptions, the load can be performed using the following
command:
windchill wt.load.LoadFromFile -d DataFile.xml -u wcadmin -p wcadmin -CONT_PATH /wt.inf.container.OrgContainer=MyOrg/wt.pdmlink.PDMLinkProduct=MyProduct-1
<NmLoader>
<csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
<csvuser></csvuser>
<csvpartName>LoadedAssm-1</csvpartName>
<csvpartNumber>LoadedAssm-1</csvpartNumber>
<csvtype>separable</csvtype>
<csvsource>make</csvsource>
<csvfolder>/Default</csvfolder>
<csvview></csvview>
<csvteamTemplate>System.TestTeamForLoads</csvteamTemplate>
<csvlifecyclestate>INWORK</csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion></csvversion>
<csviteration></csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<csvpublishFlag></csvpublishFlag>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTPart>
<csvAssemblyAddLoad
handler="wt.part.LoadPart.addPartToAssemblyLoad" >
<csvassemblyPartNumber>LoadedAssm-1</csvassemblyPartNumber>
<csvconstituentPartNumber>ExistingPart-1</csvconstituentPartNumber>
<csvconstituentPartQty>1</csvconstituentPartQty>
<csvconstituentPartUnit>ea</csvconstituentPartUnit>
</csvAssemblyAddLoad>
</NmLoader>
Creating Library Containers
A library container load file resembles the following:
<NmLoader>
<csvcontainerClass>wt.inf.library.WTLibrary</csvcontainerClass>
<csvcontainerName>Loaded Library</csvcontainerName>
<csvparentContainerPath></csvparentContainerPath>
<csvcontainerTemplateRef>General Library</csvcontainerTemplateRef>
<csvbusinessNamespace>false</csvbusinessNamespace>
<csvsharingEnabled>true</csvsharingEnabled>
<csvcreator></csvcreator>
<csvowner></csvowner>
<csvsubscriber>true</csvsubscriber>
<csvconferencingURL></csvconferencingURL>
<csvdescription>Loaded library</csvdescription>
<csvorganization></csvorganization>
<csvcreatorSelector></csvcreatorSelector>
</csvContainer>
</NmLoader>
<NmLoader>
<csvBeginWTDocument handler="wt.doc.LoadDoc.beginCreateWTDocument" >
<csvnumber>TD11</csvnumber>
<csvtype>Document</csvtype>
<csvdescription>description text</csvdescription>
<csvdepartment>DESIGN</csvdepartment>
<csvsaveIn>/Default</csvsaveIn>
<csvteamTemplate></csvteamTemplate>
<csvdomain></csvdomain>
<csvlifecycletemplate></csvlifecycletemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion>A</csvversion>
<csviteration>1</csviteration>
</csvBeginWTDocument>
<csvprimarycontenttype>ApplicationData</csvprimarycontenttype>
<csvpath>DGadReq.doc</csvpath>
<csvformat></csvformat>
<csvcontdesc></csvcontdesc>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTDocument></NmLoader>
Note: If you want to load the document in a subfolder called Folder1 in the
library, then you would use the following for the csvSaveIn field of the XML file:
/Default/Folder1
<NmLoader>
<csvBeginWTDocument handler="wt.doc.LoadDoc.beginCreateWTDocument" >
<csvnumber>TD11</csvnumber>
<csvtype>Document</csvtype>
<csvdescription>description text</csvdescription>
<csvdepartment>DESIGN</csvdepartment>
<csvsaveIn>/Default/Folder1</csvsaveIn>
<csvteamTemplate></csvteamTemplate>
<csvdomain></csvdomain>
<csvlifecycletemplate></csvlifecycletemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion>A</csvversion>
<csviteration>1</csviteration>
</csvBeginWTDocument>
<csvprimarycontenttype>ApplicationData</csvprimarycontenttype>
<csvpath>DGadReq.doc</csvpath>
<csvformat></csvformat>
<csvcontdesc></csvcontdesc>
</csvEndWTDocument>
</NmLoader>
<NmLoader>
<csvBeginWTDocument handler="wt.doc.LoadDoc.beginCreateWTDocument" >
<csvnumber>NUM:SoftType-01</csvnumber>
<csvtype>Document</csvtype>
<csvdescription>description 1112-002</csvdescription>
<csvdepartment>DESIGN</csvdepartment>
<csvsaveIn>/Default/Folder1</csvsaveIn>
<csvteamTemplate></csvteamTemplate>
<csvdomain></csvdomain>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef>com.ptc.ReferenceDocument|com.ptc.SubOfRef</csvtypedef>
<csvversion></csvversion>
<csviteration></csviteration>
</csvBeginWTDocument>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>MyAttrs/MBool</csvdefinition>
<csvvalue1>false</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>MyAttrs/MString</csvdefinition>
<csvvalue1>Eleven</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
</csvEndWTDocument>
</NmLoader>
Loading Product Objects and Parts
Step 1
Edit the output file of the CSV2XML conversion process used to create the
product containers. Search and replace the beginning and ending tag as identified
above. The result should look similar to:
<?xml version="1.0" ?>
<NmLoader>
<JProduct>
<csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
<csvuser></csvuser>
<csvpartName>FINAL ASSEMBLY</csvpartName>
<csvpartNumber>88-839-123</csvpartNumber>
<csvtype>fg</csvtype>
<csvsource>make</csvsource>
<csvfolder>/88-839-123</csvfolder>
<csvlifecycle>Default No Routing</csvlifecycle>
<csvview></csvview>
<csvteamTemplate>System.Default</csvteamTemplate>
<csvlifecyclestate>RELEASED</csvlifecyclestate>
<csvtypedef></csvtypedef>
<csviteration>1</csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>AttribContr/Productline</csvdefinition>
<csvvalue1>Military</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
<csvendIBAHolder
handler="wt.iba.value.service.LoadValue.endIBAHolder" />
</JProduct>
<csvpartNumber>44-33-111-123</csvpartNumber>
<csvtype>fg</csvtype>
<csvsource>make</csvsource>
……………..
……………..
……………..
……………..
The addition of these tags is an important step to assist in the XML transform
needed to process this single file.
Step 2
This example uses a custom-developed XML transform. The following transform
accepts a product as a parameter. The transform filters the main Product.xml file
looking for matches; when one is found, the product is properly formatted and
placed into an output file.
<?xml version="1.0" encoding="iso-8859-1"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:param name="prod_to_match"/>
<xsl:template match="/">
<NmLoader>
<xsl:for-each select="//JProduct">
<xsl:if test="csvProduct/csvfolder=concat('/',
$prod_to_match)">
<csvpartName>
<xsl:value-of select="csvProduct/csvpartNumber"/>
</csvpartName>
<csvpartNumber>
<xsl:value-of select="csvProduct/csvpartNumber"/>
</csvpartNumber>
<csvtype>
<xsl:value-of
select="translate(csvProduct/csvtype,'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
'abcdefghijklmnopqrstuvwxyz')"/>
</csvtype>
<csvsource>
<xsl:value-of select="csvProduct/csvsource"/>
</csvsource>
<csvfolder>/Default</csvfolder>
<csvlifecycle>Default No Routing</csvlifecycle>
<csvview></csvview>
<csvteamTemplate></csvteamTemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion>
<xsl:value-of select="csvProduct/csvversion"/>
</csvversion>
<csviteration>
</csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<xsl:for-each select="csvIBAValue">
<csvIBAValue
handler="wt.iba.value.service.LoadValue.createIBAValue" >
<csvdefinition>
<xsl:value-of select="csvdefinition"/>
</csvdefinition>
<csvvalue1>
<xsl:value-of select="csvvalue1"/>
</csvvalue1>
<csvvalue2></csvvalue2>
<csvdependency_id></csvdependency_id>
</csvIBAValue>
</xsl:for-each>
<csvpublishFlag></csvpublishFlag>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTPart>
</xsl:if>
</xsl:for-each>
</NmLoader>
</xsl:template>
</xsl:stylesheet>
Tip: Add custom code to check and manipulate data values. The above transform
converts the Part Type value from uppercase to lowercase.
The second batch file (go.bat) executes the transform and produces the output files.
@echo off
cls
echo.
echo --------------------------------------------------------------------------
echo --------------------------------------------------------------------------
set _oldclspth=%CLASSPATH%
echo .
echo .
echo --------------------------------------------------------------------------
echo --------------------------------------------------------------------------
REM If the parameter passed in has a space like "Common Parts" the file thinks there are 2 parms %1 and %2 so:
REM whereas 2000 and XP will remove all leading spaces from %*;
set _in=abc_prods.xml
set _out=toload\%*_data.xml
set _xsl=2products.xsl
echo .
echo .
echo --------------------------------------------------------------------------
echo --------------------------------------------------------------------------
set CLASSPATH=.;D:\xalan\bin;D:\xalan\bin\xalan.jar;D:\xalan\bin\xercesImpl.jar;D:\xalan\bin\xsltc.jar;%CLASSPATH%;
echo .
echo .
echo --------------------------------------------------------------------------
echo --------------------------------------------------------------------------
echo .
echo .
echo --------------------------------------------------------------------------
set CLASSPATH=%_oldclspth%
echo .
echo .
Tip: Use a Java Class to execute the transform. This allows the use of
parameters. In this case, the parameter passed to the stylesheet is used to filter the
output to just the objects that belong in the specified container path.
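As a sketch only, the transform step in go.bat could call the Xalan command-line processor shipped in the xalan.jar already placed on the classpath; the variable names come from the batch file above, prod_to_match matches the stylesheet parameter, and the exact invocation should be adapted to your environment:

java org.apache.xalan.xslt.Process -IN %_in% -XSL %_xsl% -OUT %_out% -PARAM prod_to_match "%*"

The value passed on the go.bat command line is forwarded to the stylesheet so that only products matching the specified container are written to the output file.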
<NmLoader>
<csvProduct handler="wt.part.LoadPart.beginCreateWTPart" >
<csvpartName>TestPart-01</csvpartName>
<csvpartNumber>TestPart-01</csvpartNumber>
<csvtype>separable</csvtype>
<csvsource>make</csvsource>
<csvfolder>/Default</csvfolder>
<csvlifecycle>Basic</csvlifecycle>
<csvview></csvview>
<csvteamTemplate>System.TestTeamForLoads</csvteamTemplate>
<csvlifecyclestate>INWORK</csvlifecyclestate>
<csvtypedef></csvtypedef>
<csviteration></csviteration>
<csvparentContainerPath></csvparentContainerPath>
</csvProduct>
<csvpublishFlag></csvpublishFlag>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTPart>
</NmLoader>
Example 1
In this first example, we want to add document number DOC100 as a reference
document on part PART100, version D, iteration 1, in the Manufacturing view.
The XML should resemble the following:
<?xml version="1.0"?>
<NmLoader>
<csvPartDocReference
handler="wt.part.LoadPart.createPartDocReference">
<csvdocNumber>DOC100</csvdocNumber>
<csvpartNumber>PART100</csvpartNumber>
<csvpartVersion>D</csvpartVersion>
<csvpartIteration>1</csvpartIteration>
<csvpartView>Manufacturing</csvpartView>
</csvPartDocReference>
</NmLoader>
Example 2
In this second example, we want document number DOC100, version A, iteration 2,
to be added as a describing document on part PART100, version D, iteration 1, in
the Manufacturing view. The XML should resemble the following:
<NmLoader>
<csvPartDocDescribes
handler="wt.part.LoadPart.createPartDocDescribes">
<csvdocNumber>DOC100</csvdocNumber>
<csvdocVersion>A</csvdocVersion>
<csvdocIteration>2</csvdocIteration>
<csvpartNumber>PART100</csvpartNumber>
<csvpartVersion>D</csvpartVersion>
<csvpartIteration>1</csvpartIteration>
<csvpartView>Manufacturing</csvpartView>
</csvPartDocDescribes>
</NmLoader>
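Either relationship file is loaded with the LoadFromFile utility in the same way as the earlier examples in this chapter; the data file name, credentials, and container path below are placeholders for your own values:

windchill wt.load.LoadFromFile -d PartDocLinks.xml -u <user> -p <password> -CONT_PATH /wt.inf.container.OrgContainer=MyOrg/wt.pdmlink.PDMLinkProduct=MyProduct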
Tag Descriptions
As shown in the examples above, the following tags are used by both the
<csvPartDocReference> and <csvPartDocDescribes> tags to create the
referencedBy and describedBy links, respectively.
Below is a description of the tags and whether they have optional or required
values.
• <csvdocNumber>: This tag is for the document number. A value is required.
• <csvdocVersion>: This is the document version. If no value is specified, the
latest version is used.
• <csvdocIteration>: This is the document iteration. If no value is specified, the
latest iteration is used.
• <csvpartNumber>: This is the part number. A value is required.
• <csvpartVersion>: This is the part version. If no value is specified, the latest
version is used.
This chapter provides a detailed example of the data loading process for
documents in a Pro/INTRALINK 8.0 system.
For detailed information on Pro/INTRALINK, refer to Getting Started with
Pro/INTRALINK 8.0. The most recent version of this guide is available from the
Library page of your Pro/INTRALINK installation.
Section Page
Loading Documents in a Pro/INTRALINK 8.0 System .....................................7-2
Sample Data File for Pro/INTRALINK ..............................................................7-2
Tip: Run the utility from the same directory where the data file resides.
Loading a Library
The command for loading a library to Pro/INTRALINK would resemble the
following:
windchill wt.load.LoadFromFile -d OneDocSet.xml -CONT_PATH \"/wt.inf.container.OrgContainer=carrie/wt.inf.library.WTLibrary=VersionTest\"
<NmLoader>
<csvBeginWTDocument handler="wt.doc.LoadDoc.beginCreateWTDocument" >
<csvname>A1002</csvname>
<csvtitle>title text</csvtitle>
<csvnumber>A1002</csvnumber>
<csvtype>Document</csvtype>
<csvdescription>description text</csvdescription>
<csvdepartment>DESIGN</csvdepartment>
<csvteamTemplate></csvteamTemplate>
<csvdomain></csvdomain>
<csvlifecycletemplate></csvlifecycletemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion>A</csvversion>
<csviteration>1</csviteration>
</csvBeginWTDocument>
<csvprimarycontenttype>ApplicationData</csvprimarycontenttype>
<csvpath>DGadReq.doc</csvpath>
<csvformat></csvformat>
<csvcontdesc></csvcontdesc>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTDocument>
<csvBeginWTDocument handler="wt.doc.LoadDoc.beginCreateWTDocument" >
<csvname>A1003</csvname>
<csvtitle>title text</csvtitle>
<csvnumber>A1003</csvnumber>
<csvtype>Document</csvtype>
<csvdescription>description text</csvdescription>
<csvdepartment>DESIGN</csvdepartment>
<csvsaveIn>/Default</csvsaveIn>
<csvteamTemplate></csvteamTemplate>
<csvdomain></csvdomain>
<csvlifecycletemplate></csvlifecycletemplate>
<csvlifecyclestate></csvlifecyclestate>
<csvtypedef></csvtypedef>
<csvversion></csvversion>
<csviteration></csviteration>
</csvBeginWTDocument>
<csvprimarycontenttype>ApplicationData</csvprimarycontenttype>
<csvpath>DGadReq.doc</csvpath>
<csvformat></csvformat>
<csvcontdesc></csvcontdesc>
<csvparentContainerPath></csvparentContainerPath>
</csvEndWTDocument>
</NmLoader>