Informatica Development Platform Developer Guide
Version 10.1.0
June 2016
© Copyright Informatica LLC 1998, 2018
This software and documentation contain proprietary information of Informatica LLC and are provided under a license agreement containing restrictions on use and
disclosure and are also protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any
form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC. This Software may be protected by U.S. and/or
international Patents and other Patents Pending.
Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as
provided in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III),
as applicable.
The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to
us in writing.
Informatica, Informatica Platform, Informatica Data Services, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange,
PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data Explorer, Informatica B2B Data Transformation, Informatica B2B Data Exchange, Informatica
On Demand, Informatica Identity Resolution, Informatica Application Information Lifecycle Management, Informatica Complex Event Processing, Ultra Messaging,
Informatica Master Data Management, and Live Data Map are trademarks or registered trademarks of Informatica LLC in the United States and in jurisdictions
throughout the world. All other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights
reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights
reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright © Meta
Integration Technology, Inc. All rights reserved. Copyright © Intalio. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems Incorporated.
All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All rights
reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights reserved.
Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights reserved. Copyright
© Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo
Communications, Inc. All rights reserved. Copyright © International Organization for Standardization 1986. All rights reserved. Copyright © ej-technologies GmbH. All
rights reserved. Copyright © Jaspersoft Corporation. All rights reserved. Copyright © International Business Machines Corporation. All rights reserved. Copyright ©
yWorks GmbH. All rights reserved. Copyright © Lucent Technologies. All rights reserved. Copyright (c) University of Toronto. All rights reserved. Copyright © Daniel
Veillard. All rights reserved. Copyright © Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright © MicroQuill Software Publishing, Inc. All rights reserved.
Copyright © PassMark Software Pty Ltd. All rights reserved. Copyright © LogiXML, Inc. All rights reserved. Copyright © 2003-2010 Lorenzi Davide, All rights reserved.
Copyright © Red Hat, Inc. All rights reserved. Copyright © The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright © EMC
Corporation. All rights reserved. Copyright © Flexera Software. All rights reserved. Copyright © Jinfonet Software. All rights reserved. Copyright © Apple Inc. All rights
reserved. Copyright © Telerik Inc. All rights reserved. Copyright © BEA Systems. All rights reserved. Copyright © PDFlib GmbH. All rights reserved. Copyright ©
Orientation in Objects GmbH. All rights reserved. Copyright © Tanuki Software, Ltd. All rights reserved. Copyright © Ricebridge. All rights reserved. Copyright © Sencha,
Inc. All rights reserved. Copyright © Scalable Systems, Inc. All rights reserved. Copyright © jQWidgets. All rights reserved. Copyright © Tableau Software, Inc. All rights
reserved. Copyright© MaxMind, Inc. All Rights Reserved. Copyright © TMate Software s.r.o. All rights reserved. Copyright © MapR Technologies Inc. All rights reserved.
Copyright © Amazon Corporate LLC. All rights reserved. Copyright © Highsoft. All rights reserved. Copyright © Python Software Foundation. All rights reserved.
Copyright © BeOpen.com. All rights reserved. Copyright © CNRI. All rights reserved.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/), and/or other software which is licensed under various
versions of the Apache License (the "License"). You may obtain a copy of these Licenses at http://www.apache.org/licenses/. Unless required by applicable law or
agreed to in writing, software distributed under these Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software
copyright © 1999-2006 by Bruno Lowagie and Paulo Soares and other software which is licensed under various versions of the GNU Lesser General Public License
Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, "as-is", without warranty of any
kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.
The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and
redistribution of this software is subject to terms available at http://www.openssl.org and http://www.openssl.org/source/license.html.
This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <[email protected]>. All Rights Reserved. Permissions and limitations regarding this
software are subject to terms available at http://curl.haxx.se/docs/copyright.html. Permission to use, copy, modify, and distribute this software for any purpose with or
without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.dom4j.org/license.html.
The product includes software copyright © 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to
terms available at http://dojotoolkit.org/license.
This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations
regarding this software are subject to terms available at http://source.icu-project.org/repos/icu/icu/trunk/license.html.
This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at
http://www.gnu.org/software/kawa/Software-License.html.
This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project Copyright © 2002 Cable & Wireless
Deutschland. Permissions and limitations regarding this software are subject to terms available at http://www.opensource.org/licenses/mit-license.php.
This product includes software developed by Boost (http://www.boost.org/) or under the Boost software license. Permissions and limitations regarding this software
are subject to terms available at http://www.boost.org/LICENSE_1_0.txt.
This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at
http://www.pcre.org/license.txt.
This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.eclipse.org/org/documents/epl-v10.php and at http://www.eclipse.org/org/documents/edl-v10.php.
This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html, http://www.bosrup.com/web/overlib/?License, http://
www.stlport.org/doc/ license.html, http://asm.ow2.org/license.html, http://www.cryptix.org/LICENSE.TXT, http://hsqldb.org/web/hsqlLicense.html, http://
httpunit.sourceforge.net/doc/ license.html, http://jung.sourceforge.net/license.txt , http://www.gzip.org/zlib/zlib_license.html, http://www.openldap.org/software/
release/license.html, http://www.libssh2.org, http://slf4j.org/license.html, http://www.sente.ch/software/OpenSourceLicense.html, http://fusesource.com/downloads/
license-agreements/fuse-message-broker-v-5-3- license-agreement; http://antlr.org/license.html; http://aopalliance.sourceforge.net/; http://www.bouncycastle.org/
licence.html; http://www.jgraph.com/jgraphdownload.html; http://www.jcraft.com/jsch/LICENSE.txt; http://jotm.objectweb.org/bsd_license.html; . http://www.w3.org/
Consortium/Legal/2002/copyright-software-20021231; http://www.slf4j.org/license.html; http://nanoxml.sourceforge.net/orig/copyright.html; http://www.json.org/
license.html; http://forge.ow2.org/projects/javaservice/, http://www.postgresql.org/about/licence.html, http://www.sqlite.org/copyright.html, http://www.tcl.tk/
software/tcltk/license.html, http://www.jaxen.org/faq.html, http://www.jdom.org/docs/faq.html, http://www.slf4j.org/license.html; http://www.iodbc.org/dataspace/
iodbc/wiki/iODBC/License; http://www.keplerproject.org/md5/license.html; http://www.toedter.com/en/jcalendar/license.html; http://www.edankert.com/bounce/
index.html; http://www.net-snmp.org/about/license.html; http://www.openmdx.org/#FAQ; http://www.php.net/license/3_01.txt; http://srp.stanford.edu/license.txt;
http://www.schneier.com/blowfish.html; http://www.jmock.org/license.html; http://xsom.java.net; http://benalman.com/about/license/; https://github.com/CreateJS/
EaselJS/blob/master/src/easeljs/display/Bitmap.js; http://www.h2database.com/html/license.html#summary; http://jsoncpp.sourceforge.net/LICENSE; http://
jdbc.postgresql.org/license.html; http://protobuf.googlecode.com/svn/trunk/src/google/protobuf/descriptor.proto; https://github.com/rantav/hector/blob/master/
LICENSE; http://web.mit.edu/Kerberos/krb5-current/doc/mitK5license.html; http://jibx.sourceforge.net/jibx-license.html; https://github.com/lyokato/libgeohash/blob/
master/LICENSE; https://github.com/hjiang/jsonxx/blob/master/LICENSE; https://code.google.com/p/lz4/; https://github.com/jedisct1/libsodium/blob/master/
LICENSE; http://one-jar.sourceforge.net/index.php?page=documents&file=license; https://github.com/EsotericSoftware/kryo/blob/master/license.txt; http://www.scala-
lang.org/license.html; https://github.com/tinkerpop/blueprints/blob/master/LICENSE.txt; http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/
intro.html; https://aws.amazon.com/asl/; https://github.com/twbs/bootstrap/blob/master/LICENSE; https://sourceforge.net/p/xmlunit/code/HEAD/tree/trunk/
LICENSE.txt; https://github.com/documentcloud/underscore-contrib/blob/master/LICENSE, and https://github.com/apache/hbase/blob/master/LICENSE.txt.
This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and
Distribution License (http://www.opensource.org/licenses/cddl1.php) the Common Public License (http://www.opensource.org/licenses/cpl1.0.php), the Sun Binary
Code License Agreement Supplemental License Terms, the BSD License (http:// www.opensource.org/licenses/bsd-license.php), the new BSD License (http://
opensource.org/licenses/BSD-3-Clause), the MIT License (http://www.opensource.org/licenses/mit-license.php), the Artistic License (http://www.opensource.org/
licenses/artistic-license-1.0) and the Initial Developer’s Public License Version 1.0 (http://www.firebirdsql.org/en/initial-developer-s-public-license-version-1-0/).
This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this
software are subject to terms available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab.
For further information please visit http://www.extreme.indiana.edu/.
This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation
is subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES
OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH
OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
Step 5. Set Up the Development Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Defining the Path for the DLL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Selecting a Compiler. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Step 6. Build Server and Client Plug-ins. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Compiling the DLL on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Compiling the Shared Library on UNIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Unregistering a PowerExchange Plug-in. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
MEDOMAIN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Objects and Methods in the Java DB adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Reader Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Writer Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Adapter Processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Using the Java DB Adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Retrieving Metadata. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Calling the Design API Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Installing and Running the Sample Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Setting Up the Runtime Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Running the Sample Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Recompiling the Sample Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Limitations of the Sample Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Preface
The Informatica Development Platform Developer Guide provides information about the APIs and SDKs
available in the Informatica Development Platform and how to use them to develop adapters and plug-ins for
PowerCenter. It provides tutorials and examples you can use when you develop your adapters and plug-ins.
The Developer Guide is written for independent software vendors, consulting organizations, and developers
who want to use the Informatica Development Platform to develop adapters to integrate PowerCenter with
other applications.
This guide assumes you have a working knowledge of PowerCenter and are familiar with application
programming interfaces.
Informatica Resources
Informatica Network
Informatica Network hosts Informatica Global Customer Support, the Informatica Knowledge Base, and other
product resources. To access Informatica Network, visit https://network.informatica.com.
To access the Knowledge Base, visit https://kb.informatica.com. If you have questions, comments, or ideas
about the Knowledge Base, contact the Informatica Knowledge Base team at
[email protected].
Informatica Documentation
To get the latest documentation for your product, browse the Informatica Knowledge Base at
https://kb.informatica.com/_layouts/ProductDocumentation/Page/ProductDocumentSearch.aspx.
If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation
team through email at [email protected].
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional
Services. Developed from the real-world experience of hundreds of data management projects, Informatica
Velocity represents the collective knowledge of our consultants who have worked with organizations from
around the world to plan, develop, deploy, and maintain successful data management solutions.
If you are an Informatica Network member, you can access Informatica Velocity resources at
http://velocity.informatica.com.
If you have questions, comments, or ideas about Informatica Velocity, contact Informatica Professional
Services at [email protected].
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that augment, extend, or enhance your
Informatica implementations. By leveraging any of the hundreds of solutions from Informatica developers
and partners, you can improve your productivity and speed up time to implementation on your projects. You
can access Informatica Marketplace at https://marketplace.informatica.com.
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
http://www.informatica.com/us/services-and-training/support-services/global-support-centers.
If you are an Informatica Network member, you can use Online Support at http://network.informatica.com.
Chapter 1
• The ability to access and update a variety of data sources in different platforms
• The ability to process data in batches or in real time
• The ability to process data in different formats and transform data from one format to another, including
complex data formats and industry specific data formats
• The ability to apply business rules to the data according to the data format
• The ability to cleanse data to ensure the quality and reliability of information
Application development can become complex and expensive when you add data integration capabilities to
an application. Data issues such as performance and scalability, data quality, and transformation are difficult
to implement. Applications fail if these complex data issues are not addressed appropriately.
PowerCenter data integration provides application programming interfaces (APIs) that enable you to embed
data integration capabilities in an enterprise application. When you leverage the data processing capabilities
of PowerCenter data integration, you can focus development efforts on application specific components and
develop the application within a shorter time frame.
PowerCenter data integration provides all the data integration capabilities that might be required in an
application. In addition, it provides a highly scalable and highly available environment that can process large
volumes of data. An integrated advanced load balancer ensures optimal distribution of processing load.
PowerCenter data integration also provides an environment that ensures that access to enterprise data is
secure and controlled by the application. It uses a metadata repository that allows you to reuse processing
logic and can be audited to meet governance and compliance standards.
Enterprise Application Components
The following figure shows the common logical components of a typical enterprise application:
• User interface logic. Controls the user interface for the application. This component works with the
application logic component to carry out end-user functions and utilities.
• Application logic. Controls the core business rules and processes for the application and determines the
application behavior. The application logic works with the user interface layer to drive user interactions
and control the user interface of the application. It also interfaces with the data processing layer to carry
out the data processing functions of the application.
• Data processing logic. Performs all the data interactions of the application. The data processing logic can
be tightly embedded with the application logic or spread across different layers. The data processing layer
performs the following services:
- Accesses and updates the application data stores and other external data sources on which the
application depends.
- Transforms data into various formats. It can transform data from external data formats to application
specific data formats or from application specific data formats to external data formats.
- Fixes errors in data and verifies the quality of data received from the user interfaces and other data
adapters.
• Application metadata repository. Contains the application metadata that drives application behavior. The
metadata repository can be a database or a set of configuration files. The application data catalog, which
includes descriptions of application and external data structures, is typically stored in the metadata
repository.
• Application data store. Contains the data required by the application and is stored in a relational
database, XML files, or other types of data storage or format. The data processing logic accesses the
application data store through SQL, web services, or APIs that provide access to the application data.
• Administrative component. Provides administrative services for the application, including application
setup and configuration, user security and management, data access security, deployment and migration,
and backups.
For an enterprise application to successfully integrate a third-party data integration component into its logic,
the application logic must be able to manage and control the data integration functions on multiple levels.
PowerCenter data integration provides management and control at the following levels:
• Data source access. PowerCenter provides adapters to access common data sources such as relational
databases, flat files, XML, and mainframe data sources. Additionally, PowerCenter data integration
provides a way to extend connectivity to custom data sources or data stores specific to the application.
• Transformation processing. PowerCenter data integration manages the data transformation requirements
of an application. It also provides a way to extend the transformation capabilities of an enterprise
application to include plug-ins that handle application-specific transformations.
• Data processing rules and metadata management. Typically, an application uses configuration files to
control application behavior. PowerCenter data integration provides interfaces to allow the application to
control the data processing logic through configuration files or the user interface. In addition,
PowerCenter provides an interface for the application to correlate application-specific metadata with the
data integration metadata. This permits a single point of maintenance for all the metadata.
• Execution. PowerCenter data integration provides an interface to allow the application to invoke, monitor,
and control the execution of data integration processes. It can capture run-time statistics and provide
reports on the status of the processes. PowerCenter also provides a web service interface that allows the
application to invoke external web services and extend the data integration capabilities with standard web
services technology.
• Security and access control. PowerCenter data integration supports enterprise application security
protocols for network connections and data transmission to ensure security in application and data
access. It establishes application level permissions and restrictions to drive user access to the
application functionality and data.
• Administration. An application must be able to administer data integration functions and processes,
including installation and configuration, user administration, backup, and migration. PowerCenter data
integration provides an interface to allow administration of the data integration process through its
application interface or through an external application.
• Auditing and reporting. PowerCenter data integration provides interfaces to access information about
changes to operational metadata. It provides a reporting interface to operational information and
statistics such as date and time the last data load was performed or the number of rows of data
processed.
The following list summarizes the capabilities that PowerCenter provides and the programming interface that provides each:
• Extensible data sources (PowerExchange API). Provides connectivity to custom data sources and data formats. Available in Java and C++.
• Extensible transformation processing (Transformation API). Allows you to invoke application-specific APIs to extend transformation processing. Available in Java and C.
• Application-driven data integration logic (Design API). Allows you to control data integration logic in the application and to create metadata for PowerCenter objects without a user interface.
• Execution control and monitoring (Operations API, command line interface (pmcmd), Batch Web Services). Allows you to drive execution and monitoring of PowerCenter integration processes through an API, the command line, or web services.
• Security and access control (command line interface (pmrep)). Allows you to administer PowerCenter user accounts and manage application connections.
• Administration (command line interface (pmrep)). Enables you to administer the data integration metadata, perform backups, and migrate data integration components across environments.
PowerCenter Interfaces
This chapter includes the following topics:
You can use the following types of interfaces to embed PowerCenter data integration capabilities in your
enterprise application:
The following application APIs and SDKs comprise the Informatica Development Platform:
• Design API. Generate PowerCenter metadata and XML documents containing mappings, sessions, and
workflows.
• Custom Function API. Develop functions written in C and add them to the Expression and Aggregator
transformations.
Installation
You can install the Informatica Development Platform from the following sources:
• Informatica Development Platform installation DVD. Run the Informatica Development Platform installer
to install the PowerCenter APIs and SDKs. You can install all the SDKs or install only the SDKs that you
want to use. To install all the SDKs in one process, select the Complete installation option. To install
specific SDKs, select the Custom installation option.
• Informatica electronic software download site. When you purchase PowerCenter and choose to download
the software, you receive a site link, user ID, and password to access the Informatica electronic software
download site. Follow the instructions in the download site to download the Informatica Development
Platform installation file.
• Informatica Technology Network. If you are a registered user of the Informatica Technology Network, you
can download the Informatica Development Platform installation file from the Informatica Development
Platform page. When you download the file, the Informatica Technology Network provides you with a
password. Use this password when you extract the files from the download file.
For more information about running the Informatica Development Platform installer, see the Informatica
Installation and Configuration Guide.
To control data integration processes and manage the repository metadata from your application, use the
following command line programs:
• pmcmd. Use pmcmd to manage workflows. You can use pmcmd to start, stop, schedule, and monitor
workflows. This command enables you to manage the services in the PowerCenter domain from an
external application.
• pmrep. Use pmrep to perform repository administration tasks such as listing repository objects, creating
and editing groups, and restoring and deleting repositories. This command enables you to manage the
PowerCenter repository from an external application.
The PowerCenter installation includes the command line programs. After you install PowerCenter, you can
use the command line programs to manage PowerCenter services and repositories from any machine in the
PowerCenter environment.
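For example, the following commands illustrate how an external application or script might start a workflow with pmcmd and back up a repository with pmrep. The service, domain, folder, and credential values are placeholders for your environment; see the Command Reference for the complete option lists.
pmcmd startworkflow -sv IntegrationService -d Domain_Dev -u Administrator -p <password> -f SalesFolder wf_Load_Sales
pmrep connect -r Repository_Dev -d Domain_Dev -n Administrator -x <password>
pmrep backup -o repository_backup.rep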
Web Services
The Web Services Hub is available in the PowerCenter domain. The Web Services Hub is a web service gateway that
allows a client application to use web service standards and protocols to access PowerCenter functionality.
The Web Services Hub enables you to turn PowerCenter workflows into web services. You can manage data
integration processes within the PowerCenter framework through requests to PowerCenter web services.
The Web Services Hub also provides web service operations that allow you to monitor and control
PowerCenter processes and get repository information.
Informatica Development Platform application APIs and SDKs support backward compatibility.
The following list shows the backward compatibility for each version of the IDP libraries and the PowerCenter server:
- IDP libraries 10.1: compatible with PowerCenter server 10.1
Note: Informatica recommends that you use the same version of the IDP libraries and PowerCenter server to
connect.
PowerExchange API
The PowerExchange API includes interfaces to the PowerCenter Client, Integration Service, and the
PowerCenter repository. Use the PowerExchange API to create custom adapters to extend PowerCenter
functionality.
You can modify and extend the PowerCenter functionality in the following ways:
• Adapters for database appliances. Typically, database appliances provide ODBC or JDBC adapters and
provide bulk load and extract utilities. You can use the PowerExchange API to build custom connectors to
seamlessly invoke the bulk load and extract utilities from the data integration processes.
• Adapters for ERP and CRM applications. ERP, CRM, and other custom applications typically provide APIs,
web services, and other interfaces to the application data stores. Some applications may use proprietary
data formats. For other applications, you may not be able to access the data store tables except through
the applications. Use the PowerExchange API to build a connector to the applications and invoke the
application API methods.
• Adapters for messaging middleware. Some enterprises may deploy a messaging middleware to allow
communication between applications. If the messaging middleware does not use standard messaging
protocols such as JMS, you can use the PowerExchange API to build adapters to read and publish
messages for the middleware.
Requirements
When you use the PowerExchange API to develop a plug-in, complete the following requirements:
Repository ID Attributes
Before you develop a plug-in using the PowerExchange API, contact Informatica to obtain the PowerCenter
repository ID attributes for the plug-in. Informatica assigns unique repository ID attributes to each plug-in.
If you develop a plug-in that will not be distributed outside your organization, you can define the repository ID
attributes without contacting Informatica. You can set the repository ID attributes to the test values. When
you distribute the plug-in outside your organization, contact Informatica to get the repository ID attributes.
You cannot use repository ID attributes that conflict with those of another vendor.
Plug-in Metadata
The repository ID attributes are part of the metadata of the plug-in. Create an XML file to contain the plug-in metadata.
The PowerExchange API installation includes a sample metadata definition file named sdkdemo.xml. You can
use the sdkdemo.xml file as a template to define the metadata for the plug-in.
Metadata Registration
After you create the metadata definition file for the plug-in, register the metadata with the PowerCenter
repository. Use the Administration Console to register the plug-in metadata with each repository where you
plan to use the plug-in.
If you create a plug-in that modifies the PowerCenter Client, you must also register the plug-in metadata with
the client machine. Register the plug-in in the Windows Registry on the client machine so that the Designer
can load the plug-in library file.
Usage
You can use the Transformation API to create transformations that invoke functions in external libraries. Use
the Transformation API to add custom data processing capabilities to PowerCenter, such as geospatial
analytical functions and statistical or mathematical functions. Create custom transformations with functions
that process multiple rows of data or hierarchical data objects.
Operations API
Use the Operations API to issue commands to the Integration Service from a third-party application. You can
use the Operations API to manage the Integration Service and run or monitor workflows from a third-party
application. You can get performance data and monitor the progress of a session as it runs or get details of
workflows and sessions that have completed their runs. For example, you can run and monitor PowerCenter
workflows and tasks using an external scheduler such as HP OpenView or an SNMP system.
Usage
You can use the Operations API to manage PowerCenter workflows and tasks in the following ways:
• Integrate PowerCenter with external schedulers. Use the Operations API to add scheduling and
monitoring capabilities to PowerCenter to provide more control over the execution of workflows and
tasks. Likewise, you can use the Operations API to run PowerCenter workflows and tasks from external
schedulers, enterprise monitoring applications, or Business Process Execution Language (BPEL) engines.
• Control PowerCenter workflows and tasks from applications. For enterprise applications with embedded
PowerCenter capabilities, use the Operations API to manage and run workflows and tasks based on
events and processes completed in the application. You can also use the Operations API to run workflows
and tasks in response to requests made through the application user interface.
• Automate execution of PowerCenter workflows and tasks. Use the Operations API to create programs or
scripts that automate control and execution of PowerCenter workflows and tasks.
The functionality provided by the Operations API is also available through the PowerCenter command line
programs. You can also implement a subset of this functionality through web services.
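As a sketch of the command line equivalent, a scheduler or monitoring script might wait for a workflow to complete and then retrieve its run details. The option values below are placeholders; confirm the exact options against the Command Reference for your version.
pmcmd waitworkflow -sv IntegrationService -d Domain_Dev -u Administrator -p <password> -f SalesFolder wf_Load_Sales
pmcmd getworkflowdetails -sv IntegrationService -d Domain_Dev -u Administrator -p <password> -f SalesFolder wf_Load_Sales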
Design API
Use the Design API to create metadata for PowerCenter objects without a user interface. Create, read, and
write objects in the PowerCenter repository, including sources, targets, transformations, mappings, sessions,
and workflows. You can use the Design API to build PowerCenter mappings and workflows without using the
PowerCenter Client tools. This allows you to use a custom application to build PowerCenter metadata or to
build PowerCenter metadata based on metadata from other applications. You can also use the Design API to
access PowerCenter objects from a user interface that matches the look and feel of another application.
Usage
You can use the Design API to read and write metadata in the PowerCenter repository in the following ways:
• Create PowerCenter design objects from a custom interface. Applications with embedded PowerCenter
data integration capabilities often require that the user interface that calls PowerCenter processes match
the user interface of the rest of the application. This provides a consistent user interface to end users.
You can develop a user interface with the look and feel of the application and use the Design API to read
and write PowerCenter metadata from the new user interface. For example, a CRM application with
embedded PowerCenter data integration capabilities needs to generate the data integration logic without
using the PowerCenter Client tools. You can use the Design API to programmatically generate the data
integration logic for workflows and tasks and the runtime configuration objects required to run the
workflows and tasks.
• Administer PowerCenter mappings, transformations and workflows from an application. Use the Design
API to access objects in the PowerCenter repository and enable monitoring and reporting from an external
administrative application.
• Build add-on utilities for PowerCenter. Use the Design API to build utilities such as mapping generators
or test generators to increase user productivity. For example, you can use the Design API to generate
multiple mappings based on user input and speed up mapping development.
You can include custom functions in PowerCenter expressions that you add to a transformation.
Developing a PowerExchange
Adapter
This chapter includes the following topics:
An adapter can consist of one or more plug-ins, including a server plug-in and a client plug-in. When you use
the PowerExchange API to develop a PowerCenter adapter for distribution, each plug-in that is part of the
adapter must have a unique identifier to distinguish the adapter from other PowerCenter adapters. Contact
Informatica to obtain a unique identifier for your adapter. A plug-in must also have an associated plug-in
definition file that contains the unique identifier assigned to the plug-in and other properties of the plug-in.
Use the plug-in definition file to register the plug-in with a PowerCenter repository.
This chapter discusses the steps to develop a PowerCenter plug-in with the PowerExchange API.
Step 1. Get the PowerCenter Repository ID Attributes
Before you develop a plug-in, email Informatica at [email protected] to get the PowerCenter
repository ID attributes for the plug-in. Informatica assigns unique repository ID attributes to each
PowerCenter plug-in. Use these repository ID attributes to identify the plug-in when you define the metadata
for the plug-in.
If you develop a plug-in to be distributed only within your organization, you can define the repository ID
attributes without contacting Informatica. You can use temporary test values for the attributes. For example,
you develop and test a plug-in before using it in a production environment. You can set the repository ID
attributes to the test values listed in “Step 1. Get the PowerCenter Repository ID Attributes” on page 23. If you
develop multiple plug-ins, each plug-in must have unique repository ID attribute values.
The following table describes the repository ID attributes that define a plug-in:
• Vendor ID. Identifies the vendor that developed the plug-in. This value corresponds to the VENDORID attribute for the PLUGIN element in the plug-in definition file. Test value: if you do not have a vendor ID, use 2001.
• dbType. Identifies the database type for the application. This value corresponds to the ID attribute for the DBTYPE element in the plug-in definition file. Test value: use any value from 200,000 to 299,900, in increments of 100.
• Datatype range. Provides a range of IDs that the vendor can associate with each datatype for the database type. You can use the values in this range in the ID attribute for the DATATYPE element in the plug-in definition file. Test value: use any value from dbType to dbType + 99, in increments of 1. For example, if you set the dbType repository ID attribute to 250,000, you can use any value from 250,000 to 250,099 for the datatype.
• Extension subtype. Associates an ID with each reader and writer. This value corresponds to the EXTENSIONSUBTYPE attribute for the EXTENSION element in the plug-in definition file. Test value: use any value from dbType to dbType + 99, in increments of 1. For example, if you set dbType to 250,000, you can use any value from 250,000 to 250,099 for the subtype.
• Connection subtype. Associates an ID with the PowerCenter connection to the third-party application. This value corresponds to the CONNECTIONSUBTYPE attribute for the CONNECTION element in the plug-in definition file. Test value: use any value from dbType to dbType + 99, in increments of 1. For example, if you set dbType to 250,000, you can use any value from 250,000 to 250,099 for the subtype.
• Metadata extension domain ID. Groups metadata extensions into one domain. This value corresponds to the ID attribute for the MEDOMAIN element in the plug-in definition file. Test value: use any value from dbType to dbType + 99, in increments of 1. For example, if you set dbType to 250,000, you can use any value from 250,000 to 250,099 for the domain ID.
Note: It is important that you obtain globally unique repository ID attributes from Informatica for your plug-in
if it will be distributed outside your organization. Repository ID attributes are invalid if they conflict with those
of another vendor. Invalid repository ID attributes will make your plug-in components unusable.
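For example, a plug-in that is developed and tested only within your organization might self-assign the following test values, which stay inside the documented ranges. These numbers are illustrative only; request globally unique values from Informatica before you distribute the plug-in.
Vendor ID: 2001
dbType: 250,000
Datatype IDs: selected from 250,000 through 250,099 (for example, 250,010 and 250,011)
Extension subtype: 250,001
Connection subtype: 250,002
Metadata extension domain ID: 250,003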
Create an XML file that contains the repository ID attributes defined for the plug-in. Give the XML file a name
to associate with your plug-in. You can create an XML file called <PluginName>.xml and add the repository ID
attributes for the plug-in as elements in the XML file.
The PowerExchange API installation includes a sample plug-in definition file named sdkdemo.xml. The
sdkdemo.xml file includes the elements and attributes that are required to define a database type. You can
use the sdkdemo.xml file as a template to set up the database type definition for your plug-in. The
sdkdemo.xml is installed in the following directory:
<PowerExchangeAPIInstallDir>/samples
When you register a plug-in definition in the PowerCenter repository, PowerCenter uses a Document Type
Definition (DTD) file called plugin.dtd to validate the XML file. The PowerCenter installation includes the
plugin.dtd file, installed in the PowerCenter Client directory. The plugin.dtd file defines the elements and
attributes you can use in the XML file. When you create or modify the XML file for your plug-in, verify that it
conforms to the structure of the plugin.dtd file.
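If you have a standalone XML validator available, you can check the definition file against the DTD before you attempt registration. For example, with the open-source xmllint tool, which is not part of the PowerCenter installation, and assuming plugin.dtd has been copied into the working directory:
xmllint --noout --dtdvalid plugin.dtd MyPlugin.xml
The command prints nothing when the file validates against the DTD and lists the offending elements otherwise.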
When you register a plug-in with a repository, the Repository Service must be running in exclusive mode.
1. In the Navigator of the PowerCenter Administration Console, select the Repository Service to which you
want to add the plug-in.
2. Run the Repository Service in exclusive mode.
3. Click the Plug-ins tab.
4. Click the link to register a Repository Service plug-in.
5. On the Register Plugin for <RepositoryService> page, click Browse to locate the plug-in file.
6. If the plug-in was registered previously and you want to overwrite the registration, select the option to
update the existing plug-in registration.
7. Enter your repository user name and password.
8. Click OK.
The Repository Service registers the plug-in with the repository. The results of the registration operation
appear in the activity log.
9. Run the Repository Service in normal mode.
Use a REG file to register a plug-in with the Windows Registry. The PowerExchange API installation includes a
sample REG file named sdk.reg. You can use the sdk.reg file as a template to create a REG file to register a
client plug-in. The sdk.reg file adds an entry for the SDKDemo plug-in in the Windows Registry.
You can also register the plug-in manually. For example, to register the sdkdemocli.dll client plug-in, set
"SDKDEMO"="sdkdemocli.dll" at the following location in the Windows Registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Informatica\PowerCenter Client Tools\<Version>\PlugIns
\<VendorName>
Note: You only need to register PowerCenter Client plug-ins in the Windows Registry. You do not need to
register plug-ins for the Integration Service in the Windows Registry.
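The following sketch shows what a registration file modeled on sdk.reg might contain for the SDKDemo client plug-in. <Version> and <VendorName> are placeholders for your PowerCenter Client version and vendor folder; compare the sketch against the sdk.reg file shipped with the PowerExchange API before you import it.
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Informatica\PowerCenter Client Tools\<Version>\PlugIns\<VendorName>]
"SDKDEMO"="sdkdemocli.dll"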
When you develop a plug-in in Java, set the CLASSPATH environment variable to include the absolute path of
the folder where the pmserversdk.jar file is located. By default, the pmserversdk.jar file is located in the
following directory:
<IDPInstallDir>/<PWXAPIInstallDir>/javalib
When you develop a plug-in in C++, set the path in one of the following environment variables, based on your
development platform:
- Windows: PATH
- AIX: LIBPATH
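For example, you might set the variables from the shell or command prompt before you build or run the plug-in. The directory names are placeholders based on the default locations described in this section.
Java plug-in (POSIX shell):
export CLASSPATH=<IDPInstallDir>/<PWXAPIInstallDir>/javalib/pmserversdk.jar:$CLASSPATH
C++ plug-in on AIX:
export LIBPATH=<directory that contains the PowerExchange API and plug-in libraries>:$LIBPATH
C++ plug-in on Windows:
set PATH=<directory that contains the PowerExchange API and plug-in DLLs>;%PATH%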
When you develop a plug-in in Java, use the compiler for the version of Java installed with PowerCenter.
The following table describes the Java compiler you can use based on the PowerCenter version:
When you develop a plug-in in C++, use a compiler based on the operating system on which PowerCenter is
installed.
The following table describes the C++ compiler you can use based on the operating system:
- Linux Red Hat x86 or x64: gcc version 4.1.2 20070213 (Red Hat 5.3)
- Linux Red Hat zSeries: gcc version 4.1.2 20080704 (Red Hat 5.3)
To modify the PowerCenter Designer interface to support your application source or target, build a Client
plug-in. The Designer runs only on Windows, so you can build Client plug-ins only on Windows.
5. Click Tools > Options.
1. Add the following statement near the beginning of the plug-in source code to specify a sleep call of 10
seconds (a platform-specific sketch follows this procedure):
sleep (10)
2. Build the plug-in in debug mode.
3. Start a PowerCenter session and attach the debugger to the PmDTM process.
For more information about attaching a debugger to the PmDTM process, see the integrated
development environment (IDE) documentation.
4. Set a breakpoint immediately after the sleep call.
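A minimal sketch of the statement from step 1, assuming a C++ plug-in. The guide shows the UNIX sleep(10) call; Sleep(10000) is the Windows equivalent because Sleep takes milliseconds. The pauseForDebugger wrapper name is only an illustration, and you should remove the call after you finish debugging.
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

// Pause the PmDTM process near plug-in initialization so that a debugger
// can attach before the interesting code runs.
static void pauseForDebugger()
{
#ifdef _WIN32
    Sleep(10000);   // Sleep() takes milliseconds on Windows
#else
    sleep(10);      // sleep() takes seconds on UNIX
#endif
}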
If the Repository Service is not running in exclusive mode, the Remove buttons for plug-ins are disabled.
Verify that all users are disconnected from the repository before you unregister a plug-in.
Note: If you unregister a plug-in, objects that you defined with the plug-in can become unusable.
1. In the Navigator of the PowerCenter Administration Console, select the Repository Service from which
you want to remove the plug-in.
2. Run the Repository Service in exclusive mode.
3. Click the Plug-ins tab.
The list of registered plug-ins appears.
4. Click the Remove button for the plug-in you want to unregister.
5. Enter a repository user name and password.
The user must be the Administrator.
6. Click OK.
7. Run the Repository Service in normal mode.
Plug-in Metadata
This chapter includes the following topics:
The root element of the XML file is POWERMART, which includes the REPOSITORY element. In the
REPOSITORY element, you use the PLUGIN element to define the properties of the plug-in.
After you create the plug-in definition file, register the plug-in with a PowerCenter repository. You can use the
Administration Console to register, update, or uninstall a plug-in from a repository.
You can also use the pmrep RegisterPlugin command to register or update the metadata definition with the
PowerCenter repository. Use the pmrep UnregisterPlugin command to uninstall the plug-in from the
PowerCenter repository.
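For example, a deployment script might register or refresh the plug-in definition file from the command line after connecting to the repository. The -i option for the input file is the commonly used form; confirm the exact options, including the flag that updates an existing registration, in the Command Reference for your version. As with the Administration Console, the Repository Service must run in exclusive mode when you register a plug-in.
pmrep connect -r Repository_Dev -d Domain_Dev -n Administrator -x <password>
pmrep registerplugin -i MyPlugin.xml
To remove the plug-in, use the pmrep unregisterplugin command with the options described in the Command Reference.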
The following element hierarchy shows the structure of the plugin.dtd:
When you create or modify the plug-in definition file, verify that it uses the structure of the plugin.dtd file. For
example, the plugin.dtd file specifies that a session extension must either be a READER or a WRITER. The
extension is invalid if you specify an extension type of BOTH.
PLUGIN Element
In the XML file, you need to define a REPOSITORY element in the root element POWERMART. The DTD file
requires these elements for validation.
The DTD file requires the root element POWERMART with the child element REPOSITORY. Add a PLUGIN
element as a child of the REPOSITORY element. Use the PLUGIN element to define the metadata for the plug-
in that you create. The attributes for the PLUGIN element uniquely identify the plug-in.
Note: The REPOSITORY element has a CODEPAGE attribute. Set this attribute to US-ASCII so that the plug-in
will work with all Repository Services that use ASCII compatible code pages.
• ID (Required). Identifier for the plug-in. Use the ID attribute to distinguish plug-ins with identical VENDORID. For example, you develop multiple plug-ins for the same vendor. Use the same VENDORID but assign a unique ID for each plug-in.
• VERSION (Required). Version of the plug-in. Use this attribute to keep track of updates to the plug-in.
After defining an identity for the plug-in, use the child elements of the PLUGIN element to define other
properties of the plug-in. For example, the plug-in can extract data from TIBCO Rendezvous. Use the child
elements of the PLUGIN element to identify the plug-in as a TIBCO reader that uses a specified TIBCO
connection. The PLUGIN element has the following child elements; a sketch of a minimal definition file follows the list:
• DBTYPE
• EXTENSION
• CONNECTION
• DBTYPETOEXTENSION
• CONNECTIONTOEXTENSION
• MEDOMAIN
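A minimal sketch of a plug-in definition file that combines these elements, assuming the test repository ID values described earlier in this guide (vendor ID 2001, dbType 250,000). Attribute names other than ID, VERSION, VENDORID, and the documented DBTYPE attributes are illustrative, other attributes that plugin.dtd may require are omitted, and the EXTENSION, CONNECTION, and mapping elements are left out; use the sdkdemo.xml sample as the authoritative template.
<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE POWERMART SYSTEM "plugin.dtd">
<POWERMART>
  <REPOSITORY CODEPAGE="US-ASCII">
    <PLUGIN NAME="MyAdapter" VENDORID="2001" ID="250000" VERSION="1.0.0">
      <DBTYPE NAME="MyApplication" ID="250000" BASEID="250000" DEFAULTDBSUBTYPE="0"
              TYPE="BOTH" COMPONENTVERSION="1" ISRELATIONAL="NO"/>
      <!-- EXTENSION, CONNECTION, DBTYPETOEXTENSION, CONNECTIONTOEXTENSION,
           and MEDOMAIN elements omitted -->
    </PLUGIN>
  </REPOSITORY>
</POWERMART>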
DBTYPE Element
Use the DBTYPE element to define the metadata for the plug-in. The attributes of the DBTYPE element
uniquely identify the database type of the plug-in.
NAME Required Name of the third-party database that you want to define for the
plug-in.
ID Required Identifier for the database type obtained from Informatica. This
attribute identifies this DBTYPE.
BASEID Required Base ID for the datatypes that can be used with this DBTYPE. Use
the lowest value from the datatype range obtained from
Informatica.
DEFAULTDBSUBTYPE Required Identifier for the default subtype for this DBTYPE. For example,
Siebel table and Siebel business component are subtypes of the
Siebel DBTYPE. When you create a Siebel source, the Designer
creates a Siebel table by default. If you do not want to specify a
DBSUBTYPE, set this attribute to 0.
FIELDSEPARATOR Optional Character to use to separate field names from table names in this
DBTYPE. For example, SAP uses a “-” (hyphen) to separate a field
name from its table name.
INVALIDCHARS Optional Use this attribute to specify characters that cannot be used in
table, field, transformation, or port names. For example, if the
$ and & characters are invalid, set the value of this attribute to
“$&”. The PowerExchange API framework uses this attribute to
perform validation.
INVALIDFIRSTCHARS Optional Use this attribute to specify characters that cannot be used as the
first character in table, field, transformation, or port names. For
example, if the @ and # characters are invalid as first characters,
set the value of this attribute to “@#”. The PowerExchange API
framework uses this attribute to perform validation.
TYPE Required Type of PowerCenter object to associate with this DBTYPE. You
can set this attribute to one of the following values:
- SOURCE
- TARGET
- BOTH
COMPONENTVERSION Required Version of this DBTYPE. Indicates that the attributes of the
DBTYPE have changed since the previous version. Use this
attribute to keep track of updates to the DBTYPE element.
Update this attribute only when the DBTYPE has changed. This
attribute does not depend on the version of the plug-in.
DATETIMEFORMAT Optional Date and time format to use with this DBTYPE.
HASGROUPS Optional Indicates whether fields for this DBTYPE can be grouped.
Set to YES to enable groups for fields in an object with this
DBTYPE. Set to NO to disable groups.
DBTYPE Element 33
Attribute Required/ Description
Optional
HASFIELDATTRS Optional Indicates whether fields of this DBTYPE can have attributes.
Set to YES to enable attributes for fields in an object with this
DBTYPE. Set to NO to disable attributes.
If you set this attribute to NO, you cannot include a FIELDATTR
child element for this DBTYPE.
HASKEYTYPE Optional Indicates whether this DBTYPE can have key types. Set to YES to
enable key types for this DBTYPE and display columns for keys in
the Designer. Set to NO to disable key types. If you set this
attribute to NO, this DBTYPE cannot use any key.
HASNULLTYPE Optional Indicates whether this DBTYPE can have NULL fields. Set to YES
to enable NULL assignment for fields in an object with this
DBTYPE. Set to NO to disable NULL fields.
HASBUSINESSNAME Optional Indicates whether fields for this DBTYPE can have business
names. Set to YES to enable business names for fields in an
object with this DBTYPE. Set to NO to disable business names.
HASFLATFILE Optional Indicates whether to display flat file information for sources of
this DBTYPE. Set to YES to display flat file information. Set to NO
to disable flat file display.
EDITGROUPS Optional Indicates whether groups in this DBTYPE can be edited. Set to YES
to enable editing of groups for fields in an object that uses this
DBTYPE. Set to NO to disable editing of groups.
EDITFIELDATTRS Optional Indicates whether field attributes for this DBTYPE can be edited.
Set to YES to enable editing of field attributes in an object that
uses this DBTYPE. Set to NO to disable editing of field attributes.
EDITFIELDNAME Optional Indicates whether field names for this DBTYPE can be edited. Set
to YES to enable editing of field names in an object that uses this
DBTYPE. Set to NO to disable editing of field names.
EDITDATATYPE Optional Indicates whether datatypes for this DBTYPE can be edited. Set to
YES to enable editing of datatypes in an object that uses this
DBTYPE. Set to NO to disable editing of datatypes.
EDITPRECISION Optional Indicates whether datatype precision for this DBTYPE can be
edited. Set to YES to enable editing of datatype precision in an
object that uses this DBTYPE. Set to NO to disable editing of
datatype precision.
EDITSCALE Optional Indicates whether datatype scales for this DBTYPE can be edited.
Set to YES to enable editing of datatype scales in an object that
uses this DBTYPE. Set to NO to disable editing of datatype scales.
EDITKEYTYPE Optional Indicates whether key types for this DBTYPE can be edited.
Set to YES to enable editing of key types in an object that uses
this DBTYPE. Set to NO to disable editing of key types.
EDITNULLTYPE Optional Indicates whether null fields for this DBTYPE can be edited. Set to
YES to enable editing of NULL fields in an object that uses this
DBTYPE. Set to NO to disable editing of NULL fields.
EDITBUSINESSNAME Optional Indicates whether business names for fields in this DBTYPE can
be edited. Set to YES to enable editing of business names for
fields in an object that uses this DBTYPE. Set to NO to disable
editing of business names.
EDITFLATFILE Optional Indicates whether the information for flat files created from this
DBTYPE can be edited. Set to YES to enable editing of flat file
information. Set to NO to disable editing of flat file information.
ISRELATIONAL Required Indicates whether this DBTYPE is relational. Set to YES if the
DBTYPE is relational. Set to NO to specify the DBTYPE as non-
relational.
CANPREVIEWDATA Optional Set to YES to enable data preview for this DBTYPE. Set to NO to
disable data preview.
FIXEDFIELDS Optional Set to YES to prevent editing or adding of fields to this DBTYPE.
Set to NO to allow editing or adding fields.
CANCHANGEDBTYPETO Optional Set to YES to enable other DBTYPEs to change into this DBTYPE.
Set to NO to disable other DBTYPEs from changing into this
DBTYPE.
CANCHANGEDBTYPEFROM Optional Set to YES to enable this DBTYPE to change into other DBTYPEs.
Set to NO to disable this DBTYPE from changing into other
DBTYPEs.
CANCOPYFIELDSTO Optional Set to YES to enable copying fields to a source or target of this
DBTYPE. Set to NO to disable copying fields into a source or
target of this DBTYPE.
CANCOPYFIELDSFROM Optional Set to YES to enable copying fields from a source or target of this
DBTYPE. Set to NO to disable copying fields from a source or
target of this DBTYPE.
CANLINKFIELDSFROM Optional Set to YES to enable fields to link from an object of this DBTYPE.
Set to NO to disable fields from linking from an object of this
DBTYPE.
CANLINKFIELDSTO Optional Set to YES to enable fields to create primary key/foreign key links
to an object of this DBTYPE. Set to NO to disable key fields from
linking to an object of this DBTYPE.
CANBECREATED Optional Set to YES to enable the Designer to create sources and targets of
this DBTYPE. Set to NO to disable the Designer from creating
sources and targets of this DBTYPE.
CANADDNEWSOURCEFIELD Optional Set to YES to enable the addition of new source fields in the
Source Analyzer. Set to NO to disable the addition of source fields
in the Source Analyzer.
CANADDNEWTARGETFIELD Optional Set to YES to enable the addition of new target fields in the Target
Designer. Set to NO to disable the addition of target fields in the
Target Designer.
The DBTYPE element has the following child elements:
• DBSUBTYPE
• KEYTYPE
• DATATYPE
• FIELDATTR
• DBTYPETOWIDGETATTR
• LIBRARY
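For illustration, a minimal DBTYPE definition might look like the following sketch. The TIBCO name and all ID values shown here are assumptions for the example; actual IDs and the datatype base ID must be obtained from Informatica.
<DBTYPE NAME="TIBCO" ID="300000" BASEID="300100" DEFAULTDBSUBTYPE="0"
    TYPE="BOTH" COMPONENTVERSION="1" ISRELATIONAL="NO"
    HASGROUPS="YES" HASFIELDATTRS="YES" CANPREVIEWDATA="NO">
    <!-- DATATYPE, KEYTYPE, FIELDATTR, and LIBRARY child elements go here -->
</DBTYPE>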
DBSUBTYPE Element
Use the DBSUBTYPE element to define subtypes of the plug-in database. For example, you have a plug-in that
can run on either Oracle or Microsoft SQL Server. Use the DBSUBTYPE element to define subtypes of each
database.
If you define the DBSUBTYPE element differently from the DBTYPE element, the definition of the DBSUBTYPE
element overrides the definition of the DBTYPE element. For example, the plug-in definition file defines a
DBTYPE element that allows business names and a DBSUBTYPE element that disables business names.
When you create a source with that DBSUBTYPE, the object does not include business names. The following table describes the attributes of the DBSUBTYPE element:
HASGROUPS Optional Indicates whether fields for this DBSUBTYPE can be grouped. Set
to YES to enable groups for fields in an object with this
DBSUBTYPE. Set to NO to disable groups.
HASFIELDATTRS Optional Indicates whether fields of this DBSUBTYPE can have attributes.
Set to YES to enable attributes for fields in an object with this
DBSUBTYPE. Set to NO to disable attributes.
If you set this attribute to NO, you cannot include a FIELDATTR
child element for this DBSUBTYPE.
HASKEYTYPE Optional Indicates whether this DBSUBTYPE can have key types. Set to YES
to enable key types for this DBSUBTYPE and display columns for
keys in the Designer. Set to NO to disable key types. If you set
this attribute to NO, this DBSUBTYPE cannot use any key.
HASNULLTYPE Optional Indicates whether this DBSUBTYPE can have NULL fields. Set to
YES to enable NULL assignment for fields in an object with this
DBSUBTYPE. Set to NO to disable NULL fields.
HASBUSINESSNAME Optional Indicates whether fields for this DBSUBTYPE can have business
names. Set to YES to enable business names for fields in an
object with this DBSUBTYPE. Set to NO to disable business
names.
HASFLATFILE Optional Indicates whether flat files can be created with this DBSUBTYPE.
Set to YES to enable the creation of flat files with this
DBSUBTYPE. Set NO to disable flat file creation.
EDITGROUPS Optional Indicates whether groups in this DBSUBTYPE can be edited. Set to
YES to enable editing of groups for fields in an object that uses
this DBSUBTYPE. Set to NO to disable editing of groups.
EDITFIELDATTRS Optional Indicates whether field attributes for this DBSUBTYPE can be
edited. Set to YES to enable editing of field attributes in an object
that uses this DBSUBTYPE. Set to NO to disable editing of field
attributes.
EDITFIELDNAME Optional Indicates whether field names for this DBSUBTYPE can be edited.
Set to YES to enable editing of field names in an object that uses
this DBSUBTYPE. Set to NO to disable editing of field names.
EDITDATATYPE Optional Indicates whether datatypes for this DBSUBTYPE can be edited.
Set to YES to enable editing of datatypes in an object that uses
this DBSUBTYPE. Set to NO to disable editing of datatypes.
EDITPRECISION Optional Indicates whether datatype precision for this DBSUBTYPE can be
edited. Set to YES to enable editing of datatype precision in an
object that uses this DBSUBTYPE. Set to NO to disable editing of
datatype precision.
EDITSCALE Optional Indicates whether datatype scales for this DBSUBTYPE can be
edited. Set to YES to enable editing of datatype scales in an
object that uses this DBSUBTYPE. Set to NO to disable editing of
datatype scales.
EDITKEYTYPE Optional Indicates whether key types for this DBSUBTYPE can be edited.
Set to YES to enable editing of key types in an object that uses
this DBSUBTYPE. Set to NO to disable editing of key types.
EDITNULLTYPE Optional Indicates whether null fields for this DBSUBTYPE can be edited.
Set to YES to enable editing of NULL fields in an object that uses
this DBSUBTYPE. Set to NO to disable editing of NULL fields.
EDITBUSINESSNAME Optional Indicates whether business names for fields in this DBSUBTYPE
can be edited. Set to YES to enable editing of business names for
fields in an object that uses this DBSUBTYPE. Set to NO to disable
editing of business names.
EDITFLATFILE Optional Indicates whether the information for flat files created from this
DBSUBTYPE can be edited. Set to YES to enable editing of flat file
information. Set to NO to disable editing of flat file information.
ISRELATIONAL Required Indicates whether this DBSUBTYPE is relational. Set to YES if the
DBSUBTYPE is relational. Set to NO to specify the DBSUBTYPE as
non-relational.
CANPREVIEWDATA Optional Set to YES to enable data preview for this DBSUBTYPE. Set to NO
to disable data preview.
CANCHANGEDBTYPETO Optional Set to YES to enable other DBSUBTYPEs to change into this
DBSUBTYPE. Set to NO to disable other DBSUBTYPEs from
changing into this DBSUBTYPE.
CANCHANGEDBTYPEFROM Optional Set to YES to enable this DBSUBTYPE to change into other
DBSUBTYPEs. Set to NO to disable this DBSUBTYPE from
changing into other DBSUBTYPEs.
CANCOPYFIELDSTO Optional Set to YES to enable copying fields to a source or target of this
DBSUBTYPE. Set to NO to disable copying fields into a source or
target of this DBSUBTYPE.
CANCOPYFIELDSFROM Optional Set to YES to enable copying fields from a source or target of this
DBSUBTYPE. Set to NO to disable copying fields from a source or
target of this DBSUBTYPE.
CANLINKFIELDSFROM Optional Set to YES to enable fields to link from an object of this
DBSUBTYPE. Set to NO to disable fields from linking from an
object of this DBSUBTYPE.
CANLINKFIELDSTO Optional Set to YES to enable fields to create primary key/foreign key links
to an object of this DBSUBTYPE. Set to NO to disable key fields
from linking to an object of this DBSUBTYPE.
CANBECREATED Optional Set to YES to enable the Designer to create sources and targets of
this DBSUBTYPE. Set to NO to disable the Designer from creating
sources and targets of this DBSUBTYPE.
CANADDNEWSOURCEFIELD Optional Set to YES to enable the addition of new source fields in the
Source Analyzer. Set to NO to disable the addition of source fields
in the Source Analyzer.
CANADDNEWTARGETFIELD Optional Set to YES to enable the addition of new target fields in the Target
Designer. Set to NO to disable the addition of target fields in the
Target Designer.
The DBSUBTYPE element has the following child elements:
• FIELDATTR
• DBTYPETOWIDGETATTR
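As an illustration only, the following sketch shows a DBSUBTYPE that disables business names for objects created with the subtype. The NAME and ID attributes shown here are assumptions that are not documented in the table above.
<DBSUBTYPE NAME="TIBCO Certified Messages" ID="1" ISRELATIONAL="NO"
    HASBUSINESSNAME="NO" EDITBUSINESSNAME="NO" CANPREVIEWDATA="NO"/>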
KEYTYPE Element
Use the KEYTYPE element to define a key for the DBTYPE. The key can be a primary key, foreign key, or a new
type of key that you define. The following table describes the attributes of the KEYTYPE element:
TABLETYPE Required Set to SOURCE if this type of key will be used for sources. Set to TARGET if this
type of key will be used for targets. To use a key type for sources and targets,
define a key type for sources and another for targets.
KEYTYPE Required Set to PRIMARY to create a primary key type. Set to FOREIGN to create a foreign
key type. Set to CUSTOM to create a custom key type.
KEYTYPEBIT Optional Decimal value of the key type bits for the key type.
The first eight bits are reserved by Informatica. You can change the first two bits
to indicate a primary or foreign key. Set the first bit to 1 to indicate that the key is
a primary key. Set the second bit to 1 to indicate that the key is a foreign key. You
can set any bit except the first 8 bits to indicate a custom key. For example, to
create a user-defined key in PeopleSoft that is also a primary key, set the first bit
and the ninth bit to 1. The resulting decimal value is 257.
Set this attribute only for custom key types.
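For example, the following sketch defines a primary key type and a custom key type for sources. The NAME attribute is an assumption, and the KEYTYPEBIT value 257 marks a custom key that is also a primary key, as in the PeopleSoft example above.
<KEYTYPE NAME="Primary Key" TABLETYPE="SOURCE" KEYTYPE="PRIMARY"/>
<KEYTYPE NAME="User-Defined Key" TABLETYPE="SOURCE" KEYTYPE="CUSTOM" KEYTYPEBIT="257"/>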
DATATYPE Element
Use the DATATYPE element to define datatypes for the DBTYPE.
For example, you want to define a datatype named CBigInt for the DBTYPE. The following sample code shows
the DATATYPE element with the attributes that define the CBigInt datatype:
<DATATYPE NAME="CBigInt" ID="300201" ODBCTYPE="SQL_BIGINT" READONLYPRECISION="10"
READONLYSCALE="0" HASSCALE="YES" CANEDITSCALE="NO" CANEDITPRECISION="NO"
INTERNALCONVERTABLE="NO"/>
The following table describes the attributes of the DATATYPE element:
ID Required Identifier for the datatype. The ID must be within the range of
DATATYPE IDs provided by Informatica.
ODBCTYPE Required ODBC type of this datatype. Define a separate DATATYPE element
for each ODBC type.
READONLYPRECISION Optional Indicates the default precision for this datatype. If the
CANEDITPRECISION attribute for this DATATYPE is set to YES, set
this attribute to “0”.
READONLYSCALE Optional Indicates the default scale for this datatype. If the CANEDITSCALE
attribute for this DATATYPE is set to YES, set this attribute to “0”.
HASSCALE Optional Set to YES so this datatype can have a scale. Set to NO to disable
the scale.
CANEDITSCALE Optional Set to YES to allow editing of the datatype scale. Set to NO to
disable editing of the scale.
CANEDITPRECISION Optional Set to YES to allow editing of the datatype precision. Set to NO to
disable editing of the precision.
INTERNALCONVERTABLE Optional Set to YES to internally convert the datatype to another datatype.
The datatype converts to a different datatype that has the same ID
and the INTERNALCONVERTABLE attribute set to NO. Set this
attribute to NO to disable internal conversion of the datatype.
If you set this attribute to YES, define another datatype with the
same ID and the INTERNALCONVERTABLE attribute set to NO.
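For example, the following sketch pairs the CBigInt datatype defined above with an internally convertible definition that uses the same ID. The SQL_NUMERIC pairing is an illustrative assumption.
<DATATYPE NAME="CBigInt" ID="300201" ODBCTYPE="SQL_BIGINT" INTERNALCONVERTABLE="NO"/>
<DATATYPE NAME="CBigInt" ID="300201" ODBCTYPE="SQL_NUMERIC" INTERNALCONVERTABLE="YES"/>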
You must define at least one DATATYPE element for each of the following ODBC types:
• SQL_BIGINT
• SQL_BINARY
• SQL_BIT
• SQL_CHAR
• SQL_DATE
• SQL_DECIMAL
• SQL_DOUBLE
• SQL_FLOAT
• SQL_IDENTITY
• SQL_INTEGER
• SQL_LONGVARBINARY
• SQL_LONGVARCHAR
• SQL_MONEY
• SQL_NUMERIC
• SQL_REAL
• SQL_SMALLINT
• SQL_TIME
• SQL_TIMESTAMP
• SQL_TINYINT
• SQL_WCHAR
• SQL_WVARCHAR
• SQL_WLONGVARCHAR
• SQL_VARBINARY
• SQL_VARCHAR
FIELDATTR Element
Use the FIELDATTR element to define attributes for fields of the DBTYPE. The following example defines a field
attribute named Physical Table Name for the DBTYPE:
<FIELDATTR NAME="Physical Table Name" ID="300200" DESCRIPTION="Physical Table Name"
TYPE="BOTH" ISINT="NO" ISHIDDEN="NO"/>
The following table describes the attributes of the FIELDATTR element:
TYPE Required Type of PowerCenter object to associate with this field attribute. You can set
this attribute to one of the following values:
- SOURCE
- TARGET
- BOTH
ISINT Optional Set to YES if the field attribute is an integer. Set to NO if the field attribute is not
an integer.
ISHIDDEN Optional Set to YES if the field attribute is hidden. Set to NO if the field attribute is not
hidden.
DBTYPETOWIDGETATTR Element
By default, the source or target for a plug-in has pre-defined properties. The values for these properties are
also pre-defined. You can assign default values for these properties. Use the DBTYPETOWIDGETATTR
element to set the default values for the properties.
Define a DBTYPETOWIDGETATTR element for each property for which you want to define default values.
The following table describes the pre-defined properties and their possible values:
Load Scope Target Determines when the writer plug-in loads processed rows into the external application. This property can have one of the following values:
- Row. Loads a row after it is processed.
- Transaction. Loads all rows processed in a transaction on commit.
- All Input. Loads all rows at end of file.
Default is All Input.
Partial Load Recovery Target Specifies how the target handles a previous partial load during recovery. This property can have one of the following values:
- None
- Append
- Truncate
Default is None.
The following table describes the attributes of the DBTYPETOWIDGETATTR element:
OBJECTTYPE Required Type of PowerCenter object to associate with this DBTYPETOWIDGETATTR. You
can set this attribute to one of the following values:
- SOURCE
- TARGET
ISREADONLY Optional Set to YES to make the property read-only. Set to NO if the user can edit the value.
ISDISABLED Optional Set to YES to disable the property. Set to NO to enable the property.
ISHIDDEN Optional Set to YES to hide the property in the Designer. Set to NO to display the property
in the Designer.
ISEXPORTED Optional Set to YES if this attribute can be exported. Set to NO if this attribute cannot be
exported.
ALLOWALLVALUES Optional Set to YES to display all values that can be selected for the property. Set to NO to
specify a subset of all values. Define a MULTIVALUEATTRIBUTE element for each
value to display.
The DBTYPETOWIDGETATTR element has the following child element:
• MULTIVALUEATTRIBUTE
MULTIVALUEATTRIBUTE Element
Use the MULTIVALUEATTRIBUTE element to define each value to display for a property. You can also define a
MULTIVALUEATTRIBUTE element as a child of an ATTRIBUTE element when you set the TYPE attribute of an
extension or connection ATTRIBUTE element to MULTIVALUED.
NAME Required Enter one of the values of an attribute or property with multiple possible values.
LIBRARY Element
Use the LIBRARY element to specify a library or shared object to associate with the following objects:
• DBTYPE
• EXTENSION
• CONNECTION
• MEDOMAIN
The following example shows the definition of the LIBRARY element for an HTML_WRITER session extension:
<LIBRARY NAME = "wrtplugindll.dll" OSTYPE = "NT" />
The following table describes the attributes of the LIBRARY element:
NAME Required Name of the library to associate with a DBTYPE, EXTENSION, CONNECTION, or
MEDOMAIN.
OSTYPE Required Operating system used to develop the library. Set to one of the following operating
systems:
- NT
- SOLARIS
- AIX
- DEC
- Linux
- OS390.
The LIBRARY element has the following child element:
• AUXFILE
EXTENSION Element
Use the EXTENSION element to specify the properties of the session extension. You can define your session
extension as a reader or writer. For example, you can use the following XML code to create an HTML_WRITER
extension:
<EXTENSION NAME= "HTML_WRITER" EXTENSIONTYPE= "WRITER"
COMPONENTVERSION = "1.0.0">
EXTENSIONTYPE Required Type of extension. Set to READER for a reader extension. Set to WRITER
for a writer extension.
EXTENSIONSUBTYPE Required Extension subtype. You can obtain the value for this attribute from
Informatica.
HASFILEINFO Optional Set to YES if the session extension requires a file description. Set to NO
if the session extension does not require a file description.
Note: If you set the DISPLAYFILEINFO attribute to YES, set the
HASFILEINFO attribute to YES.
DISPLAYFILEINFO Optional Set to YES to enable the display of file information for the session
extension. Set to NO to disable the display of file information.
Note: If you set the DISPLAYFILEINFO attribute to YES, set the
HASFILEINFO attribute to YES.
COMPONENTVERSION Required Version of the EXTENSION. Indicates that the attributes of the
EXTENSION have changed since the previous version. Use this attribute
to keep track of updates to the EXTENSION element.
LANG Optional Language in which the plug-in is developed. Set to one of the following
values:
- CPP
- JAVA
Default value is CPP.
The EXTENSION element has the following child elements:
• ATTRIBUTE
• LIBRARY
• CLASS
• ALLOWEDDBTYPE
• ALLOWEDTEMPLATE
• CONNECTIONREFERENCE
ATTRIBUTE Element
Use the ATTRIBUTE element to define an attribute of the extension or connection that you want to create. For
example, to define a Stylesheet Name attribute for the HTML_WRITER extension, you can use the following
code:
<ATTRIBUTE NAME = "Stylesheet Name" ID = "1" TYPE = "PROPERTY" DATATYPE = "STRING"
REFERENCELEVEL = "TARGET" ISREQUIRED = "YES" ISSESSIONOVERRIDABLE = "YES"
ISINSTANCEOVERRIDABLE = "YES" ISPARTITIONOVERRIDABLE = "YES" ISSESSIONVARSALLOWED =
"YES" ISSERVERVARSALLOWED = "YES" ISVARPREFIXALLOWED = "YES" ISVARFULLNAMEALLOWED =
"YES" VARIABLEPREFIX = "varpfx"/>
The following table describes the attributes of the ATTRIBUTE element:
TYPE Required Type for the attribute. Set to one of the following values:
- SQL
- PROPERTY
- BOOLEAN
- MULTIVALUED
- PASSWORD
- FILENAME
The value for the TYPE attribute determines the value for the
DATATYPE attribute. For a list of the DATATYPE attribute values
that correspond to the TYPE attribute values, see “ATTRIBUTE
Element” on page 46.
DATATYPE Required Set to NUMBER or STRING based on the value of the TYPE
attribute. For more information, see “ATTRIBUTE Element” on page
46.
REFERENCELEVEL Required Transformation level that the attribute applies to. When you define
a reader extension or a reader connection, you can set this
attribute to SOURCE or DSQ. When you define a writer extension or
a writer connection, set this attribute to TARGET.
ISINSTANCEOVERRIDABLE Optional Set to YES to enable overriding reusable session instances. Set to
NO to disable instance overrides.
ISSESSIONVARSALLOWED Optional Set to YES to allow session variables in the attribute. Set to NO to
disable session variables.
ISSERVERVARSALLOWED Optional Set to YES to allow server variables in the attribute. Set to NO to
disable server variables.
ISREQUIRED Optional Set to YES if the attribute is required for the extension or
connection. Set to NO to make the attribute optional for the
extension or connection.
ISVARPREFIXALLOWED Optional Set to YES to enable variable prefixes for the attribute. Set to NO
to disable variable prefixes for the attribute.
ISVARFULLNAMEALLOWED Optional Set to YES to enable variable full names for the attribute. Set to
NO to disable variable full names for the attribute.
GROUPID Optional Identifier for the group to which the extension or connection
attribute belongs. You can assign a number from 1 to 16 as the
group ID.
GROUPPOLICY Optional Defines the number of attributes that can be in one group. Set to
one of the following values:
- NONE
- EXACTLYONE
- ATMOSTONE.
The following table shows the possible values for the TYPE attribute and the corresponding DATATYPE
values:
Table 1. Values for TYPE and DATATYPE Attributes of the ATTRIBUTE Element
TYPE DATATYPE
BOOLEAN NUMBER
SQL STRING
PASSWORD STRING
FILENAME STRING
Use the following guidelines when you set the attributes of the ATTRIBUTE element:
• If you set the ISSESSIONVARSALLOWED attribute or the ISSERVERVARSALLOWED attribute to YES, you must
enter YES for either the ISVARPREFIXALLOWED attribute or the ISVARFULLNAMEALLOWED attribute. You
cannot set both the ISVARPREFIXALLOWED and ISVARFULLNAMEALLOWED attributes to YES at the same
time.
• You must set the VARIABLEPREFIX attribute when you set the ISVARPREFIXALLOWED attribute or
the ISVARFULLNAMEALLOWED attribute to YES.
• If you define the GROUPPOLICY attribute, you must set the GROUPID attribute. However, you can define
the GROUPID attribute without setting the GROUPPOLICY attribute.
The ATTRIBUTE element has the following child elements:
• MULTIVALUEATTRIBUTE
• ATTRIBUTECATEGORY
ATTRIBUTECATEGORY Element
In the Workflow Manager, attributes are divided into two groups: Memory Properties, and Files, Directories,
and Commands. Use the ATTRIBUTECATEGORY element to indicate to which group the ATTRIBUTE element
belongs. To indicate that the ATTRIBUTE element belongs to the Memory Properties group, set the value to
MEMORY. To indicate that the ATTRIBUTE element belongs to the Files, Directories, and Commands group,
set the value to FILESANDDIRECTORIES.
LIBRARY Element
For more information about the LIBRARY Element, see “LIBRARY Element” on page 43.
CLASS Element
Use the CLASS element to specify the class for the session extension when the library is developed in JAVA.
For example:
<CLASS NAME ="com/informatica/powerconnect/jms/server/reader/JMSReaderPlugin" />
The following table describes the attributes of the CLASS element:
NAME Required Fully qualified name of the class that implements the session extension.
ALLOWEDDBTYPE Element
Use the ALLOWEDDBTYPE element to define the valid DBTYPEs for the session extension. Include an
ALLOWEDDBTYPE element in the EXTENSION element for each DBTYPE you want to use with the extension.
For example, to build a reader extension for a TIBCO Rendezvous source, define a reader extension and use
the ALLOWEDDBTYPE element to associate the TIBCO DBTYPE with the TIBCO reader. You can also use the
ALLOWEDDBTYPE element to make the TIBCO DBTYPE the default database for the TIBCO reader.
DBTYPE Required ID of the DBTYPE you want to use with the session extension. Informatica
provides a list of predefined DBTYPE IDs for known databases. For a list of the
DBTYPE IDs for databases, see “ALLOWEDDBTYPE Element” on page 48.
ISDEFAULT Optional Set to YES if the DBTYPE is the default type for the session extension. Set to
NO if the database type is not the default type.
The following table lists the predefined DBTYPE IDs:
Database DBTYPE ID
Sybase 2
Oracle 3
Informix 4
IBM DB2 6
Flatfile 7
ODBC 8
XML 12
Teradata 15
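For example, the following sketch associates a hypothetical TIBCO DBTYPE, shown here with an illustrative ID of 300000, with the session extension and makes it the default database type:
<ALLOWEDDBTYPE DBTYPE="300000" ISDEFAULT="YES"/>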
ALLOWEDTEMPLATE Element
You can define this element when you define an extension for a Custom transformation. If you define an
ALLOWEDTEMPLATE element for the EXTENSION element, do not define an ALLOWEDDBTYPE element.
CONNECTIONREFERENCE Element
The CONNECTIONREFERENCE element defines the association between a connection and a session
extension.
You can use the CONNECTIONREFERENCE element to create a group of connections to associate with a
session extension. You can group connections used by the session extension for a particular task. For
example, you want to extract data from an SAP R/3 system into PowerCenter. An application connection
extracts data from the SAP R/3 system and creates a flat file in the SAP R/3 system. An FTP connection
transfers this flat file to PowerCenter. Use the CONNECTIONREFERENCE element to define a connection
group for the reader extension that includes the application and FTP connections.
The following table describes the attributes of the CONNECTIONREFERENCE element:
REFERENCELEVEL Required Set the object level to which the CONNECTIONREFERENCE applies.
For a reader extension, set this attribute to SOURCE or DSQ. For a
writer extension, set this attribute to TARGET.
ISSESSIONVARSALLOWED Optional Set to YES to allow session variables for the extension. Set to NO
to disable session variables.
ISVARPREFIXALLOWED Optional Set to YES to enable variable prefixes for the extension. Set to NO
to disable variable prefixes for the extension.
ISVARFULLNAMEALLOWED Optional Set to YES to enable variable full names for the extension. Set to
NO to disable variable full names for the extension.
ISDATACONNECTION Optional Set to YES to indicate that the extension is a connection to a data
source or target.
The CONNECTIONREFERENCE element has the following child element:
• ALLOWEDCONNECTION
ALLOWEDCONNECTION Element
Use the ALLOWEDCONNECTION element to define the connection subtypes that can be used for the
CONNECTIONREFERENCE element. If a CONNECTIONREFERENCE element requires multiple connections, you
can use the ALLOWEDCONNECTION element to define the connections to group together within a
CONNECTIONREFERENCE element. The connections you include in a group must be defined in the
CONNECTION element.
CONNECTIONSUBTYPE Required Identifier for the connection subtype. For a list of predefined
connection subtype IDs, see “ALLOWEDCONNECTION Element” on page
50.
ISDEFAULT Optional Set to YES if the connection is the default connection for the session
extension. Set to NO if the connection is not the default for the session
extension.
If the CONNECTIONNUMBER attribute for the parent
CONNECTIONREFERENCE element is set to 1, set this attribute to YES.
SUPPORTPARTITIONS Optional Set to YES if partitions can use the connection. Set to NO if partitions
cannot use the connection.
If you set this attribute to NO, do not set the ISFORALLPARTITIONS
attribute to YES.
ISFORALLPARTITIONS Optional Set to YES if all partitions can use the connection. Set to NO if not all
partitions can use the connection.
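As a sketch only, a reader extension connection group with a single default connection might look like the following. The NAME, CONNECTIONNUMBER, and CONNECTIONSUBTYPE values are assumptions.
<CONNECTIONREFERENCE NAME="SAP Application Connection" REFERENCELEVEL="DSQ"
    CONNECTIONNUMBER="1" ISDATACONNECTION="YES">
    <ALLOWEDCONNECTION CONNECTIONSUBTYPE="300001" ISDEFAULT="YES"/>
</CONNECTIONREFERENCE>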
The following table shows the predefined IDs for the CONNECTIONSUBTYPE attribute:
Connection Subtype Type of Connection Connection Subtype ID
The ALLOWEDCONNECTION element has the following child element:
• HIDDENCONNECTIONATTRIBUTETOEXTENSION
HIDDENCONNECTIONATTRIBUTETOEXTENSION Element
Use the HIDDENCONNECTIONATTRIBUTETOEXTENSION element to hide a connection attribute from an
extension.
CNXATTRIBUTEID Required Attribute ID of the connection attribute to hide from a session extension.
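For example, the following sketch hides the connection attribute with ID 3, an illustrative value, from the session extension:
<HIDDENCONNECTIONATTRIBUTETOEXTENSION CNXATTRIBUTEID="3"/>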
CONNECTION Element
Use the CONNECTION element to define a connection for a plug-in. After you register a plug-in with a defined
connection, the connection information appears in the Connection Object Browser in the Workflow Manager.
CONNECTIONSUBTYPE Required Identifier for the connection subtype. For a list of predefined
connection subtype IDs, see “ALLOWEDCONNECTION Element” on page
50.
HASUSERNAME Optional Set to YES if the connection requires a username. Set to NO if the
connection does not require a username.
HASUSERPASSWORD Optional Set to YES if the connection requires a password. Set to NO if the
connection does not require a password.
HASCONNECTSTRING Optional Set to YES if the connection requires a connect string. Set to NO if
the connection does not require a connect string.
HASCODEPAGE Optional Set to YES if the connection has a code page. Set to NO if the
connection does not have a code page.
HASPERMISSIONS Optional Set to YES to display the permission properties in the Designer. Set
to NO to disable the permission properties.
COMPONENTVERSION Required Version of the CONNECTION. Indicates that the attributes of the
CONNECTION have changed since the previous version. Use this
attribute to keep track of updates to the CONNECTION element.
The CONNECTION element has the following child elements:
• ATTRIBUTE
• LIBRARY
DBTYPETOEXTENSION Element
When you register a new plug-in in the PowerCenter repository, you can define a relationship between the
DBTYPE and the session extension of another plug-in registered in the repository. For example, a registered
plug-in has a TIBCO_READER extension. When you register a new plug-in that contains a TIBCO_ORDERS
DBTYPE, you can associate the TIBCO_ORDERS DBTYPE with the TIBCO_READER extension. The following
example shows a DBTYPETOEXTENSION element defining the relationship between a DBTYPE and an
extension:
<DBTYPETOEXTENSION EXTENSIONTYPE="READER" EXTENSIONSUBTYPE="300000" DBTYPE="300000"
ISDEFAULT="YES" />
You can also update the metadata of a registered plug-in to associate the session extension with a DBTYPE.
Update the ALLOWEDDBTYPE element of an extension to associate it with a DBTYPE. The following table
describes the attributes of the DBTYPETOEXTENSION element:
EXTENSIONTYPE Required Set to READER if the referenced session extension is a reader. Set to
WRITER if the referenced session extension is a writer.
EXTENSIONSUBTYPE Required Set the session extension subtype for the extension.
DBTYPE Required Set to the ID of the referenced DBTYPE. For a list of predefined DBTYPE
IDs, see “ALLOWEDDBTYPE Element” on page 48.
ISDEFAULT Optional Set to YES if the DBTYPE is the default for the session extension. Set to
NO if the DBTYPE is not the default for the session extension.
CONNECTIONTOEXTENSION Element
Use the CONNECTIONTOEXTENSION element to associate a connection with the session extension of
another plug-in registered in the repository. The following table describes the attributes of the
CONNECTIONTOEXTENSION element:
EXTENSIONTYPE Required Set to READER if the referenced session extension is a reader. Set to
WRITER if the referenced session extension is a writer.
EXTENSIONSUBTYPE Required Set the session extension subtype for the extension.
CONNECTIONTYPE Required Connection type of the extension. Set to one of the following
connection types:
- RELATIONAL
- APPLICATION
- FTP
- EXTERNALLOADER
- QUEUE.
CONNECTIONSUBTYPE Required Identifier for the connection subtype. For a list of predefined
connection subtype IDs, see “ALLOWEDCONNECTION Element” on page
50.
ISDEFAULT Optional Set to YES if the connection is the default connection for the session
extension. Set to NO if the connection is not the default for the session
extension.
If CONNECTIONNUMBER attribute is set to 1, set this attribute to YES.
SUPPORTPARTITIONS Optional Set to YES if partitions can use the connection. Set to NO if partitions
cannot use the connection.
If you set this attribute to NO, do not set the ISFORALLPARTITIONS
attribute to YES.
ISFORALLPARTITIONS Optional Set to YES if all partitions can use the connection. Set to NO if not all
partitions can use the connection.
The CONNECTIONTOEXTENSION element has the following child elements:
• HIDDENCONNECTIONATTRIBUTETOEXTENSION
• HIDDENEXTENSIONATTRIBUTETOCONNECTION
HIDDENEXTENSIONATTRIBUTETOCONNECTION Element
Use the HIDDENEXTENSIONATTRIBUTETOCONNECTION element to hide an extension attribute from a
connection associated with a session extension of another plug-in registered in the repository. The
CONNECTIONTOEXTENSION element defines the session extension with the attribute to hide. It also defines
the connection from which to hide the attribute.
MEDOMAIN Element
You can define a metadata extension domain to group metadata extensions. Use the MEDOMAIN element to
define a metadata extension domain. For example, you can create the TIBTARGETS metadata extension
domain for targets and the TIBSOURCES metadata extension domain for sources.
You can use the attributes of the MEDOMAIN element to control whether clients can view or edit
the metadata extension domain. The following table describes the attributes of the MEDOMAIN element:
ID Required Enter an ID from the range of metadata extension domain IDs obtained
from Informatica.
KEY Optional Enter an encrypted domain key for the metadata extension domain. You
also use this key to access private metadata extensions. Use the
pmpasswd <password> command to encrypt a password.
CLIENTVISIBLE Optional Set to YES to enable the Designer to display the metadata extension
domain. Set to NO to disable the Designer from displaying the
metadata extension domain.
COMPONENTVERSION Required Enter the version of the MEDOMAIN. This allows you to keep track of
updates to the MEDOMAIN element.
The following example shows the MEDOMAIN element defining the TIBTARGETS metadata extension
domain:
<MEDOMAIN NAME="TIBTARGETS" ID = "2" KEY = "KEY" DESCRIPTION = "TIBCO SOURCES"
CLIENTVISIBLE = "YES" CLIENTEDITABLE = "YES" ACCESSWITHOUTKEY = "YES" COMPONENTVERSION =
"1"/>
The MEDOMAIN element has the following child elements:
• MEDEFINITION
• LIBRARY
MEDEFINITION Element
Metadata extensions extend the metadata stored in the repository by associating information with individual
repository objects. You can use the MEDEFINITION element to define a metadata extension. For example, you
have a third-party application and want to track the creation of new fields. You can create the USERNAME
metadata extension to store the name of the user that creates a new field.
You can use the attributes of the MEDEFINITION element to control whether the Designer can view or edit the
metadata extension. The following table describes the attributes of the MEDEFINITION element:
DATATYPE Required Enter a datatype for the metadata extension. You can enter STRING,
NUMERIC, BOOLEAN, or XML.
Note: If you set the DATATYPE attribute to XML or STRING, you must set the
MAXLENGTH attribute to a value greater than 0.
MAXLENGTH Optional Enter the maximum length for the metadata extension. You can specify a
value up to 2,147,483,647.
Note: If you set the DATATYPE attribute to XML or STRING, you must set the
MAXLENGTH attribute to a value greater than 0.
DBTYPE Required Enter the ID of the DBTYPE you want to use with the metadata extension.
You can also enter ALL to make the metadata extension available to all
DBTYPEs.
OBJECTTYPE Required Enter the name of the object type used with the metadata extension. You
can enter SOURCE, TARGET, MAPPING, MAPPLET, SESSION, WORKFLOW, or
WORKLET. You can also enter ALL to make the metadata extension available
to all OBJECTTYPEs.
DEFAULTVALUE Optional Enter the default value for the metadata extension.
ISSHAREREAD Optional Set to YES to enable shared reading for the metadata extension. Set to NO
to disable share reading for the metadata extension.
ISSHAREWRITE Optional Set to YES to enable shared writing for the metadata extension. Set to NO to
disable share writing for the metadata extension.
ISCLIENTVISIBLE Optional Set to YES to enable the Designer to display the metadata extension. Set to
NO to disable the Designer from displaying the metadata extension.
ISCLIENTEDITABLE Optional Set to YES to enable the Designer to edit the metadata extension. Set to NO
to disable the Designer from editing the metadata extension.
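For example, the USERNAME metadata extension described above could be sketched as follows. The MAXLENGTH and DBTYPE values are illustrative assumptions.
<MEDEFINITION NAME="USERNAME" DATATYPE="STRING" MAXLENGTH="80" DBTYPE="ALL"
    OBJECTTYPE="SOURCE" DEFAULTVALUE="" ISSHAREREAD="YES" ISSHAREWRITE="YES"
    ISCLIENTVISIBLE="YES" ISCLIENTEDITABLE="YES"/>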
The Java DB adapter uses the default PowerCenter Designer user interface to import source and target
definitions for Java DB. It does not use a client plug-in. Alternatively, you can build a Java DB client plug-in
that uses native drivers to get the JDBC data source. The Java DB adapter includes the following components:
• Plug-in definition file. The plug-in definition file for the Java DB adapter is named pmJDBC.xml. The
definition file includes elements that describe the data source and define how to connect, read, and write
to the data source.
• Server plug-in. The server plug-in file for the Java DB adapter is named pmJDBCplugin.jar. The server
plug-in includes JDBC reader and writer extensions to access, read, and write to the Java DB relational
database.
Using this example as a model, you can follow the same techniques to use the PowerExchange API to build
adapters for other JDBC-compliant relational databases.
Plug-in Definition File
The plug-in definition file, pmJDBC.xml, includes the following information:
• Attributes that define how to connect to the database, including reader and writer properties that define
the run-time configuration.
• Attributes that define the datatypes in Java DB and how the datatypes map to the PowerCenter datatypes.
• Names of the client, reader, and writer plug-in binaries and version information.
• Metadata and repository IDs specific to the plug-in.
The pmJDBC.xml file must be registered with a PowerCenter repository so that the PowerCenter design and
run-time environments can support the adapter.
Server Plug-in
The server plug-in includes the run-time reader and writer components. The reader component connects to
and reads from the data source. The writer component writes data to the target. The jar files for the reader
and writer plug-in must be placed in the CLASSPATH or /javalib directory.
PLUGIN
The PLUGIN element contains the PowerCenter repository ID attributes for the Java DB adapter. An adapter
that is distributed outside an organization requires unique repository ID attributes assigned by Informatica.
An adapter that is not distributed, such as the adapter example, can contain test values in the PLUGIN
element.
The following table lists the PowerCenter repository ID attributes for the sample adapter:
Attribute Value
Plugin Id 305050
PC Version 8.7
DBTYPE
The DBTYPE element represents the database type of the source or target and contains attributes to uniquely
identify the type of database.
For example, the NAME attribute identifies the name of the DBTYPE. The ID attribute refers to the database
type ID. The BASEID identifies the base ID for the datatypes of this database type.
The following code shows the DBTYPE definition in the pmJDBC.xml file:
<PLUGIN ID="305050" NAME="JDBC" VERSION="8.7.0" VENDORID="1" VENDORNAME="Informatica"
DESCRIPTION="PWX JDBC" >
<DBTYPE NAME="JDBC" ID="305050" BASEID="305050"
<DATATYPE ID ="305051" NAME ="CHAR" ODBCTYPE ="SQL_CHAR"
The following code shows some of the DATATYPE definitions in the pmJDBC.xml file:
<DATATYPE ID ="305051" NAME ="CHAR" ODBCTYPE ="SQL_CHAR" INTERNALCONVERTABLE ="NO"
READONLYSCALE ="0"/>
<DATATYPE ID ="305051" NAME ="CHAR" ODBCTYPE ="SQL_WCHAR" INTERNALCONVERTABLE ="YES"
READONLYSCALE ="0"/>
<DATATYPE ID ="305052" NAME ="VARCHAR" ODBCTYPE ="SQL_LONGVARCHAR"
INTERNALCONVERTABLE ="YES"/>
<DATATYPE ID ="305052" NAME ="VARCHAR" ODBCTYPE ="SQL_WVARCHAR"
INTERNALCONVERTABLE ="YES"/>
The following table lists the JDBC datatypes and their corresponding PowerCenter ODBC datatypes. Each
datatype mapping has a DATATYPE element definition in the pmJDBC.xml file:
JDBC Datatype ODBC Datatype
CHAR SQL_CHAR
CHAR SQL_WCHAR
VARCHAR SQL_VARCHAR
VARCHAR SQL_LONGVARCHAR
VARCHAR SQL_WVARCHAR
LONGVARCHAR SQL_LONGVARCHAR
LONGVARCHAR SQL_WLONGVARCHAR
NUMERIC SQL_DECIMAL
NUMERIC SQL_NUMERIC
DECIMAL SQL_DECIMAL
DECIMAL SQL_MONEY
BIT SQL_BIT
BOOLEAN SQL_BIT
TINYINT SQL_TINYINT
SMALLINT SQL_SMALLINT
INTEGER SQL_INTEGER
BIGINT SQL_BIGINT
REAL SQL_FLOAT
REAL SQL_REAL
FLOAT SQL_DOUBLE
DOUBLE SQL_DOUBLE
BINARY SQL_BINARY
VARBINARY SQL_VARBINARY
LONGVARBINARY SQL_LONGVARBINARY
LONGVARBINARY SQL_IDENTITY
DATE SQL_DATE
TIME SQL_TIME
TIMESTAMP SQL_TIMESTAMP
CLOB SQL_LONGVARCHAR
BLOB SQL_BINARY
DBTYPETOWIDGETATTR
The following code shows the DBTYPETOWIDGETATTR element definition in the pmJDBC.xml file:
<DBTYPETOWIDGETATTR
OBJECTTYPE="TARGET"
WIDGETATTRIBUTENAME="Load Scope"
ISREADONLY="YES"
ISDISABLED="YES"
ISHIDDEN="NO"
ISEXPORTED="NO">
<MULTIVALUEATTRIBUTE NAME="transaction"/>
</DBTYPETOWIDGETATTR>
The DBTYPETOWIDGETATTR element defines the Load Scope type used in the Java DB adapter to commit
the target object. The Load Scope type depends on the commit type and commit interval configured in the
Workflow Manager for the JDBC session. In this example, the Load Scope value is transaction.
EXTENSION
To indicate that the Java DB adapter uses the PowerExchange API, the following attributes must be defined
for the reader extension and writer extension:
• LANG attribute. Specifies that the programming language for the reader and writer extensions is “JAVA”.
• CLASS NAME attribute. Specifies the fully qualified class name for the reader and writer extensions.
For example, the Java DB adapter has a reader extension that defines the LANG and CLASS NAME attributes
for the extension.
The following code shows the EXTENSION element definition in the pmJDBC.xml file:
<EXTENSION
DESCRIPTION ="JDBC"
EXTENSIONTYPE ="READER"
COMPONENTVERSION ="8.7.0"
EXTENSIONSUBTYPE ="305050"
SUPPORTPARTITIONS ="YES"
LANG = "JAVA">
:
:
<CLASS NAME ="com/informatica/powerconnect/JDBC/server/reader/JDBCReaderPlugin" />
:
:
</EXTENSION>
Reader Extension
The Integration Service uses the reader extension to read from a data source. You can define more than one
reader extension for a data source if the data source provides multiple interfaces. The Java DB adapter
requires one reader extension definition.
Use the ATTRIBUTE child element to define session attributes for the reader extension. The reader session
attributes you define for an adapter are accessible from the session editor in the Workflow Manager. The
following table describes the session attributes defined for the Java DB reader extension in the pmJDBC.xml file:
Tracing Level Tracing level for the log messages to send to the session log.
Pre SQL SQL statements to run before an SQL select statement is run on the source. The Pre
SQL statements use the same connection as the select statements.
Post SQL SQL statements to run after an SQL select statement is run on the source. The Post
SQL statements use the same connection as the select statements.
SQL Query Overrides the default SQL query with a custom SQL query.
Source Filter Sets a filter for the rows in the data source.
The following code shows an ATTRIBUTE element for the reader extension in the pmJDBC.xml file:
<EXTENSION
NAME ="JDBC Reader"
EXTENSIONTYPE ="READER"
COMPONENTVERSION ="1.0.0"
EXTENSIONSUBTYPE ="305050"
SUPPORTPARTITIONS ="Locally"
LANG = "JAVA">
<ATTRIBUTE ID="1"
TYPE="SQL"
DATATYPE ="STRING"
REFERENCELEVEL="DSQ"
ISREQUIRED ="NO"
DEFAULTVALUE =""
ISSESSIONOVERRIDABLE ="YES"
ISINSTANCEOVERRIDABLE="YES"
ISPARTITIONOVERRIDABLE="YES"
ISSESSIONVARSALLOWED ="YES"
ISSERVERVARSALLOWED="YES"
VARIABLEPREFIX="$PWX" />
:
:
Writer Extension
The Integration Service uses the writer extension to write to a data source. You can define more than one
writer extension for a data source if the data source provides multiple interfaces. The Java DB adapter
requires one writer extension definition.
Use the ATTRIBUTE child element to define session attributes for the writer extension. The writer session
attributes you define for an adapter are accessible from the session editor in the Workflow Manager.
The following table describes the session attributes defined for the Java DB writer extension in the
pmJDBC.xml file:
Update Update strategy to use when updating target data. You can use one of the following
update strategies:
- Update as Update. Perform an update on the target.
- Update as Insert. Perform an insert on the target.
- None. Perform no operation on the target.
- Update else Insert. Perform an update on the target. If the update is not
successful, perform an insert.
Truncate target option Truncate the target table before performing any operation.
Pre SQL SQL statements to run before an SQL select statement is run on the target. The Pre
SQL statements use the same connection as the select statements.
Post SQL SQL statements to run after an SQL select statement is run on the target. The Post
SQL statements use the same connection as the select statements.
The following code shows an ATTRIBUTE element for the writer extension in the pmJDBC.xml file:
<EXTENSION
NAME ="JDBC Writer"
EXTENSIONTYPE ="WRITER"
EXTENSIONSUBTYPE ="305050"
SUPPORTPARTITIONS ="Locally"
LANG = "JAVA">
<ATTRIBUTE ID="1"
NAME="Insert"
TYPE="BOOLEAN"
DATATYPE ="NUMBER"
ISREQUIRED ="NO"
DEFAULTVALUE ="1"
REFERENCELEVEL="TARGET"
VARIABLEPREFIX="$PWX"
ISVARPREFIXALLOWED="YES"
ISSERVERVARSALLOWED="YES"
ISSESSIONOVERRIDABLE ="YES"
ISINSTANCEOVERRIDABLE="YES"
ISPARTITIONOVERRIDABLE="YES"/>
:
:
CONNECTION
The CONNECTION element defines the attributes of the connection. The connection attributes you define for
an adapter are accessible from the Connection Tab in the Workflow Manager. You can create a new JDBC
connection that is relational and that can be used for a JDBC reader or writer. The connection string takes a
JDBC URL that points to the database location. For example: jdbc:derby://localhost:1527/firstdb
The following table describes the connection attributes defined for the Java DB adapter in the pmJDBC.xml file:
JDBC Driver Name JDBC driver class name to load for JDBC calls. To add the JDBC driver jar file to
the CLASSPATH and load the class by default, copy the JDBC driver jar file to the
following directory: server/bin/javalib
For the Java DB database, the JDBC driver name is
“org.apache.derby.jdbc.ClientDriver”
Connection Environment SQL Connection environment SQL to run each time the Integration Service connects with
the JDBC database. You can use this attribute to set up the environment for
subsequent transactions.
This is an optional attribute.
Transaction Environment SQL Transaction environment SQL to run each time a new transaction is started in the
external database.
This is an optional attribute.
Connection Retry Period Length of time in seconds that the Integration Service attempts to re-connect to
the database if the connection fails.
The following code shows a CONNECTION element defined in the pmJDBC.xml file:
<CONNECTION NAME ="PWX JDBC"
HASCODEPAGE ="YES"
HASUSERNAME ="YES"
CONNECTIONTYPE ="RELATIONAL"
HASPERMISSIONS ="NO"
HASUSERPASSWORD ="YES"
COMPONENTVERSION ="1.0.0"
HASCONNECTSTRING ="YES"
CONNECTIONSUBTYPE ="305050">
<ATTRIBUTE ID ="1"
CONNECTIONTOEXTENSION
The following code shows a CONNECTIONTOEXTENSION element defined in the pmJDBC.xml file that sets
the connection properties for the reader and writer extensions:
<CONNECTIONTOEXTENSION
EXTENSIONTYPE="READER"
EXTENSIONSUBTYPE="305050"
CONNECTIONTYPE="RELATIONAL"
CONNECTIONSUBTYPE="305050"
CONNECTIONNUMBER="1"
ISDEFAULT="YES"
SUPPORTPARTITIONS="YES"
ISFORALLPARTITIONS ="YES"/>
<CONNECTIONTOEXTENSION
EXTENSIONTYPE="WRITER"
EXTENSIONSUBTYPE="305050"
CONNECTIONTYPE="RELATIONAL"
MEDOMAIN
The MEDOMAIN element contains all the metadata extension attributes. The metadata extensions enable you
to add custom attributes to transformations, sources, and targets required to support the adapter. The
metadata extension attributes you define for an adapter are accessible from the Source Qualifier editor in a
mapping that contains a Java DB source.
The following table describes the metadata extensions defined for the Java DB adapter in the
pmJDBC.xml file:
SQL Query Overrides the default SQL query with a custom SQL query.
Source Filter Sets a filter for the rows in the data source.
The following code shows a MEDEFINITION element defined in the pmJDBC.xml file:
<MEDEFINITION
NAME = "Select Distinct"
DATATYPE = "STRING"
MAXLENGTH = "3"
OBJECTTYPE = "APPLICATIONDSQ"
DEFAULTVALUE = ""
DESCRIPTION = "Select Distinct"
ISSHAREREAD = "YES"
ISSHAREWRITE = "YES"
ISCLIENTVISIBLE = "YES"
ISCLIENTEDITABLE = "YES"/>
Reader Session
The JDBC reader session starts with the JDBC source in the mapping. The session (pmdtm.exe) loads the
PowerExchange API server framework (pmsdksrv.dll) and the PowerExchange API for Java framework
(pmjsdk.dll). The PowerExchange API for Java framework reads the Java class name from the repository as
defined in the CLASS NAME attribute of the reader extension in the pmJDBC.xml file:
<CLASS NAME ="com/informatica/powerconnect/JDBC/server/reader/JDBCReaderPlugin" />
The PowerExchange API for Java framework searches for this class in the CLASSPATH or the PowerCenter
/javalib directory and then calls the CreatePluginDriver() method. The method returns a
JDBCReaderPluginDriver object (com/informatica/powerconnect/JDBC/server/reader/
JDBCReaderPluginDriver).
The PowerExchange API for Java framework initializes the JDBCReaderPluginDriver object and calls the
CreateSQDriver method, which returns a JDBCReaderSQDriver object reference. The PowerExchange API for
Java framework initializes the JDBCReaderSQDriver object and calls the createPartitionDriver method. The
createPartitionDriver method gets the Source Qualifier field metadata from the session extension and creates
a JDBCReaderPartitionDriver object.
The PowerExchange API for Java framework initializes the JDBCReaderPartitionDriver object and calls the
run method with the OutputBuffer(IOutputBuffer) as parameter. The run method runs the reader query and
loads the resultset in the OutputBuffer. The data in the OutputBuffer can be used in a transformation and
then written to the target.
The PowerExchange API for Java framework deinitializes the JDBCReader objects in LIFO order.
Writer Session
The following diagram shows the sequence of calls made during a writer session:
The JDBC writer session is called when a mapping or session contains a JDBC target. The session
(pmdtm.exe) loads the PowerExchange API server framework (pmsdksrv.dll) and the PowerExchange API for
Java framework (pmjsdk.dll).
The PowerExchange API for Java framework initializes the JDBCWriterGroupDriver object and calls the
createWriterPartitionDriver method, which creates the linked target field vector and passes it to the
WriterPartitionDriver object.
The PowerExchange API for Java framework initializes the JDBCWriterPartitionDriver object and calls the run
method with the InputBuffer as parameter. It prepares the required SQL query, binds the data in the prepared
queries and loads to the target table.
The PowerExchange API for Java framework deinitializes the JDBCWriter objects in LIFO order.
Adapter Processes
Datatype Conversion
The JDBC datatypes must be converted to the PowerCenter datatypes for processing. The Source Qualifier in
the Designer displays the JDBC datatypes and converts them to the PowerCenter datatypes.
The following table lists the JDBC datatypes and the equivalent PowerCenter datatypes:
JDBC Datatype PowerCenter Datatype
Char String
Varchar String
LongVarchar String
Numeric Decimal
Decimal Decimal
Real Float
Double Double
Float Double
Date Date/time
Int Integer
Long Text
Binary Binary
Varbinary Binary
LongVarBinary Longvarbinary
Blob Binary
Clob Text
The following table shows the possible datatype conversions from JDBC to PowerCenter:
[Conversion matrix: rows list the JDBC datatypes boolean, short/small int, int, long, float, double, String, Date/Time, and Binary; the columns mark the PowerCenter datatypes that each JDBC datatype can convert to.]
The JDBCReaderPartitionDriver object takes an IOutputBuffer object, outBuff (the reader buffer), as an input
parameter. The output buffer gets the data returned by the SELECT query from the reader data source. The
JDBCReaderPartitionDriver object calls IOutputBuffer.readData() for each column and stores data in the
output buffer for each row of the resultset object.
During a source-based commit session, the Integration Service commits data to the target based on the
number of rows from active sources in a target load order group. These rows are referred to as source rows.
When the Integration Service runs a source-based commit session, it identifies the commit source for each
pipeline in the mapping. The Integration Service generates a commit row from these active sources at every
commit interval.
During a target-based commit session, the Integration Service commits rows based on the number of target
rows and the key constraints on the target table. The commit point depends on the following factors:
• Commit interval. The number of rows to use as a basis for commits. Configure the target commit interval
in the session properties.
• Writer wait timeout. The amount of time the writer waits before it issues a commit. Configure the writer
wait timeout when you set up the Integration Service in the Administration Console.
• Buffer Blocks. Blocks of memory that hold rows of data during a session. You can configure the buffer
block size in the session properties. You cannot configure the number of rows that the block holds.
When you run a target-based commit session, the Integration Service can issue a commit before, on, or after
the configured commit interval. The Integration Service uses the following process to determine when to
issue commits:
• When the Integration Service reaches a commit interval, it continues to fill the writer buffer block. When
the writer buffer block fills, the Integration Service issues a commit.
• If the writer buffer fills before the commit interval, the Integration Service writes to the target, but waits to
issue a commit. It issues a commit when one of the following conditions is true:
- The writer is idle for the amount of time specified by the Integration Service writer wait timeout option.
- The Integration Service reaches the commit interval and fills another writer buffer.
Partition Support
The Java DB adapter implements pass-through partitions. You can create multiple pass-through partitions for
a session. Each partition runs on a separate thread. For each partition, specify the JDBC reader and writer
session attributes such as custom query and the number of sorted ports. All partitions share the same
connection.
Error Handling
All errors or exceptions are written to a session log. The Java DB adapter creates a message catalog object
with a message file name (ReaderConstants.JDBC_RDR_MSGFILE). The message file contains the messages
for each error, warning, or informational message.
The following code shows how an error message is written to the session log:
//get the sq instance
IAppSQInstance appSqInstace = (IAppSQInstance)lstSqInstances.get(sqIndex);
//get session extn for the dsq
ISessionExtension sessExtn = session.getExtension(EWidgetType.DSQ, sqIndex);
if(sessExtn == null){
//there is no error or exception given by the framework,
//so we have to make this check and throw an SDKException if needed
String dsqName = appSqInstace.getName();
SDKMessage msg = rdrCat.getMessage("2002_ERR_SESS_EXT", dsqName);
throw new SDKException(msg);
}
The message for error 2002_ERR_SESS_EXT is extracted from JDBC_RDR_MSGFILE (JDBCRdrMsg.xml) and
logged to the session log.
The following code shows another type of message written to the session log:
try
{
m_resultset = m_Stmt.executeQuery(jdbcStmt);
}
catch (SQLException ex) {
msg =rdrCat.getMessage("2027_SELECT_QRY_EXEC_FAIL",ex.getMessage());
JDBCPluginException.JDBCSQLException(utilsSrv, ex);
throw new JDBCPluginException(msg);
}
This generic exception in the JDBC reader is generated when the query fails to run. The message for error
2027_SELECT_QRY_EXEC_FAIL is extracted from the JDBCRdrMsgs.properties file and logged to the session
log.
The following code shows an informational message written to the session log:
SDKMessage msg = rdrCat.getMessage("2001_DSQ_CREATED",appSqInstace.getName());
utilsSrv.logMsg(ELogMsgLevel.INFO, msg);
You can use the Java DB adapter in the same way that you use other PowerExchange adapters.
This example shows how to create a custom transformation to load a large volume of data into a database
and call the custom transformation from a PowerCenter mapping. The example loads data from a text file
into a MySQL database using the MySQL bulk loader. You can use the same technique to load bulk data into
other databases that have bulk loader utilities.
The example uses the following components to perform the bulk load:
• Bulk loader custom transformation. This custom transformation writes source data to a text file and then
calls the bulk loader utility to move the data to the MySQL database. The bulk loader custom
transformation consists of server and client DLL files. It requires a plug-in XML that you must register in a
PowerCenter repository.
• Mapping that calls the bulk loader custom transformation. After you register the bulk loader custom
transformation, you can create a mapping to call the transformation. The attributes available for the
custom transformation instance in the mapping correspond to the attributes you set in the plug-in XML.
Bulk Loader Transformation
The bulk loader custom transformation encapsulates the processes of writing the data to a text file and
invoking the bulk loader utility with the given parameters.
Data Structure
Determine the source data you want to use. The structure of the source data determines the target ports and
the structure of the text file you create. The text file determines the structure of the database table where you
load the data. When you create the mapping in PowerCenter, you can create or import the structure of the
source data.
The mysqlimport utility determines the name of the database table from the name of the text file that it reads
the data from. For example, mysqlimport loads the data in a file named orders.txt into a table named orders.
In this example, the name of the database table is the same as the name of the text file.
The example uses the following command to run the MySQL bulk loader:
mysqlimport
--local
--fields-terminated-by=,
--lines-terminated-by="\r\n"
--host=localhost
--port=3306
MySQLDB
$OutputFileName
--user=root
--password=MyPassword
--delete
--columns=F1, F2, F3, F4, F5
The following list describes the command options:
• mysqlimport. Client program that provides a command-line interface to the LOAD DATA command. The options for mysqlimport correspond to clauses of LOAD DATA command syntax. The mysqlimport client program is located in the /bin directory of the MySQL installation directory.
• MySQLDB. Name of the MySQL database into which the bulk loader loads the data.
• $OutputFileName. Text file from which the bulk loader reads the data it moves into the database table. In this example, the variable $OutputFileName contains the name of the text file, which is the same as the name of the table to which the mysqlimport utility writes the data. The variable is also used in the Output Filename attribute of the target and the Datafile attribute of the custom transformation in the PowerCenter session.
• --columns=F1, F2, F3, F4, F5. List of columns in the database table. The column names in the list must be the same as the column names of the target table and the port names in the custom transformation. In this example, you can drag and drop the ports from the source to the custom transformation to create ports with the correct names.
The parameters for the mysqlimport utility are included in the plug-in XML file for the bulk loader
transformation. They determine the metadata extension attributes available for the transformation in a
PowerCenter mapping. The name of the text file from which you load data must match the target file name.
The port names of the custom transformation must match the column names of the target in the mapping.
When you build a bulk loader custom transformation for another database based on this example, the
parameter requirements may be different. Read the documentation for the database to determine the
requirements of the bulk loader utility.
The plug-in XML for the example custom transformation includes the parameters for the MySQL bulk loader.
If you implement a bulk loader custom transformation for another database, the plug-in XML must include the
bulk loader parameters for the database that you are implementing.
Many attributes in the plug-in XML for the bulk loader custom transformation example are disabled to prevent
misuse. If you set the session extension attribute ISSESSIONOVERRIDABLE and the metadata extension
attribute ISCLIENTVISIBLE to YES, you can override the disabled attributes in the PowerCenter Client tools.
The plug-in XML for the example custom transformation sets the ISSESSIONOVERRIDABLE and
ISCLIENTVISIBLE attributes to YES so you can modify the values in the Designer and Workflow Manager.
Note: The following XML code snippets do not constitute the full content of the plug-in XML file and may not
be syntactically correct.
The plug-in XML for the example custom transformation is named pmbulkloadtransform.xml and includes the
following definitions and session extensions:
<PLUGIN NAME="Bulk Load Transformation" ID="305150" VENDORNAME="Informatica"
        VENDORID="1" DESCRIPTION="Bulk Load Transformation" VERSION="8.6.1">
        <ALLOWEDTEMPLATE TEMPLATEID="305150"/>
    </EXTENSION>
    <!-- ************ Datafile path needs to be same as target file name ************* -->
    <MEDEFINITION NAME="DATAFILE"
        DEFAULTVALUE = "$OutputFileName"
    <!-- ********* Column names will be populated from the CT ports programmatically ********* -->
    <MEDEFINITION NAME="COLUMNS"
        DEFAULTVALUE = "--columns="
    <!-- ************** Other parameters that the third party loader takes ************** -->
    <MEDEFINITION NAME="OTHERPARAMS"
        DEFAULTVALUE = "--delete"
    :
    :
    </MEDOMAIN>
    </TEMPLATE>
</PLUGIN>
Compile the code for the client and copy the DLL file to the \client\bin directory of the PowerCenter Client.
The server plug-in allows the custom transformation to pass data to the file writer and create the command
to invoke the bulk loader. After the file is written, the custom transformation reads the command and
required parameters and invokes the bulk loader during the deinit() call.
The bulk loader custom transformation example includes a file named BulkLoadCTPartitionDriver.cpp with
the code to pass data to the file writer and invoke the MySQL bulk loader. To create a bulk loader custom
transformation for another database, you can modify the code to call the bulk loader utility of the new
database with the appropriate parameters.
The following snippet from the server plug-in code shows how the custom transformation invokes the bulk
loader:
ISTATUS BulkLoadCTPartitionDriver::deinit(void)
{
    ...
    return ISUCCESS;
}

/* Creates the third party loader parameters and executes the process */
ISTATUS BulkLoadCTPartitionDriver::createLoaderProcess()
{
    HANDLE nProcessID;
    //Get the Bulkload CT session extension attributes defined as third party loader parameters.
    IINT32 isParamFileEnabled = 0;
    //If true, read the loader attributes from the parameter file; otherwise get them from the
    //metadata extension and session attributes.
    if(isParamFileEnabled)
    {
        if (IFAILURE == m_pSessExtn->getAttribute(gBulkLoad_ParamFileName,
                m_sBulkLoadParameterFileName))
        {
            return IFAILURE;
        }
    }
    PmUString sExeName;
    //Override the CT bulk load session extension attribute "Loader Exe Path" with
    //metadata extension attributes.
    if(m_sSessExtnLoaderExe.isEmpty() || m_sSessExtnLoaderExe.getLength() == 0)
    {
        sExeName = m_sLoaderExe + gBulkLoad_space;
    }
    else
    {
        sExeName = PmUString(m_sSessExtnLoaderExe) + gBulkLoad_space;
    }
    ...
    IUString sCmnLine;
    if(!isParamFileEnabled)
    {
        if(m_pBulkLoadUtils->getUtilsServer()->expandString(sCmdLineParams.buffer(), sCmnLine,
                IUtilsServer::EVarParamSession, IUtilsServer::EVarExpandDefault) != ISUCCESS)
        {
            return IFAILURE;
        }
        //Start the loader process by passing the command line parameters
        nProcessID = BLExtrProcess::start(sExeName, PmUString(sCmnLine), m_pILog, m_pCatalog);
    }
    else
    {
        //Start the loader process by passing the command line parameters
        nProcessID = BLExtrProcess::start(sExeName, sCmdLineParams, m_pILog, m_pCatalog);
        if ( nProcessID == 0 )
        {
            return IFAILURE;
        }
        if (BLExtrProcess::finish(nProcessID, m_pILog, m_pCatalog, sExeName) != ISUCCESS)
        {
            return IFAILURE;
        }
The following figure shows the pass-through mapping with the bulk loader custom transformation:
The target for the mapping is a flat file target. The target file is the text file that the custom transformation
writes data to. When the custom transformation completes writing to the target, it invokes the bulk loader
utility and writes the data from the text file to the database. The name of the text file is the same as the name
of the database table that the bulk loader writes to.
Since the mapping is a pass-through mapping, the column names are linked from the source to the custom
transformation and from the custom transformation to the target.
• In the mapping, set the parameter values in the Metadata Extension tab of the bulk loader custom
transformation. The Metadata Extension tab displays the bulk loader parameters defined in the plug-in
XML file.
The following figure shows the parameters that display in the Metadata Extension tab when you edit the
bulk loader custom transformation:
• In the session task, set the parameter values in the Transformations node and the Targets node of the
Mapping tab. The Transformations node and Targets node display attributes defined in the plug-in XML
file.
The value of the Datafile attribute of the transformation must be the same as the value of the Output
Filename attribute of the target. In the example, the $OutputFileName variable is used for both attributes
in the session. You can use the actual file name instead of the $OutputFileName variable. The file name
must match the file name you provide in the plug-in XML and the bulk loader command.
The following figure shows the parameters for the bulk loader transformation in the session:
• If you select the Enable Parameter File attribute in the session, you can set the parameter values in a
parameter file. The values in the parameter file override the values you set for the transformation in the
session.
The parameter file for the bulk loader custom transformation example has the following contents:
[Global]
[Service:DI_861]
[mysql.WF:wf_m_mysqlload]
[mysql.WF:wf_m_mysqlload.ST:s_m_mysqlload]
$Param_LOADERPATH=C:\Program Files\MySQL\MySQL Server 5.0\bin\mysqlimport
$Param_LOCATION=--local
$Param_FIELDSEPARATOR=--fields-terminated-by=,
$Param_LINETERMINATOR=--lines-terminated-by="\r\n"
$Param_HOSTNAME=--host=localhost
$Param_HOSTPORT=--port=3306
$Param_DATABASENAME=mysqldb
$Param_USER=--user=root
$Param_PASSWORD=--password=asahu
$Param_DATAFILE=ins_tgt
$Param_OTHERPARAMS=--delete
$Param_COLUMNS=--columns=
;By design, this variable is not prefixed with Param_.
$OutputFileName=ins_tgt
To run the example on other platforms, recompile the libraries on your platform with your compiler. Based on
this example, you can create bulk loader transformations for other databases that have bulk load utilities.
You can run the bulk loader custom transformation example to see how it works. Before you run the bulk
loader example, verify that your installation of the MySQL database has the bulk loader utility.
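For example, you can confirm that the mysqlimport client program is available on the path before you start:
mysqlimport --version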
1. In the MySQL database server, create a database named mysqldb for the example. Set up the user
account and password and verify the host name and port number for MySQL.
2. Extract the bulk loader example files to a temporary directory.
3. Copy the following client libraries from the <BulkLoaderTempDir>\client\release directory to the \client\bin directory of the PowerCenter Client:
pmbulkloadvldn.dll
pmbulkloadtransformres411.dll
pmbulkloadtransformres409.dll
pmbulkloadtransform.dll
4. Copy the following resource files from the <BulkLoaderTempDir>\server\release directory to the \client\bin directory of the PowerCenter Client:
pmbulkload_ja.res
pmbulkload_en.res
Modify the following components to match your environment:
• BulkLoadParam.prm. Modify the LOADERPATH parameter to point to the location of the mysqlimport client program. Modify the host name and port, database name, and user name and password to match your MySQL server and database information.
• pmbulkloadtransform.xml. Modify the LOADERPATH parameter to point to the location of the mysqlimport client program. Modify the host name and port, database name, and user name and password to match your MySQL server and database information. Modify all instances of the attributes in the file.
• Bulk loader custom transformation. After you import the mapping example into PowerCenter, edit the bulk loader custom transformation. In the Properties tab, set the Runtime Location attribute to point to the location of the PowerCenter libraries. The default location for the PowerCenter libraries is <PowerCenterDir>/server/bin. This is the directory where you copied the server libraries for the example in step 5.
11. In the Administrator tool, register the plug-in XML file from the <BulkLoaderTempDir>\repository directory with a Repository Service:
pmbulkloadtransform.xml
12. Use the following file from the <BulkLoaderTempDir>\client directory to register the client libraries:
bulkloadtransform.reg
13. Run the workflow.
• View the error messages in the session log files. If the session log shows an error in the bulk load
command, verify that the parameters you pass to the mysqlimport client program are correct. Verify that
the database user name and password and the columns for the table correspond to the database and
table you are loading data into.
• Review the plug-in XML and verify that the following attributes are set to the correct values:
- LOADERPATH. This attribute must point to the location of the bulk loader utility. In the example, the
LOADERPATH must point to the location of the mysqlimport command.
- DATAFILE. This attribute must be set to the name of the table where the bulk loader loads the data. In
the example, the default value is set to the $OutputFileName variable. Verify that this variable is defined
in the session. Otherwise, change the value to the target file name. The value of the DATAFILE attribute
must be the same as the target file name without the extension.
- DATABASENAME. Name of the MySQL database that contains the table where the bulk loader loads the
data.
• If you enable parameter files for the session, verify that the values set for the parameters in the parameter
file are correct. Verify that the folder, workflow, and session are correct in the file header.
• If the runtime libraries do not load correctly, recompile the example client and server libraries and link them
to the libraries included in the Informatica Development Platform (IDP). Ensure that the version of the APIs
you link to is the same as the version of PowerCenter on which you run the example. The API libraries are
located in the <IDPInstallationDir>/SDK/PowerCenter_Connect_SDK/lib directory.
Design API
You can use a properties file such as pcconfig.properties to store the configuration settings to connect to the
repository. Alternatively, you can use the Design API functions to configure the connection to the repository.
The following sample code shows how to connect to the repository and browse through the folders and their
contents:
The following sample code shows how to set the security domain and Kerberos property:
// Sets the security domain.
rep.getRepoConnectionInfo().setSecurityDomain(SecurityDomainName);
Creating Objects
This section describes concepts involved in using the Design API to create and work with objects in
PowerCenter.
The following sample code shows how to create repository and folder objects:
/**
 * Creates a repository
 */
protected void createRepository() {
    rep = new Repository( "repo1", "repo1", "This repository contains API test samples" );
}

/**
 * Creates a folder
 */
protected void createFolder() {
    folder = new Folder( "Folder1", "Folder1", "This is a folder containing java mapping samples" );
    rep.addFolder( folder );
}
You can use the Design API to create source and target objects for the following data sources:
• Flat file (fixed or delimited). The Design API supports flat files for source and target objects.
• Relational databases. The Design API supports the following types of relational databases for source and
target objects:
- DB2
- MS SQL Server
- Sybase
- Informix
- Teradata
• ODBC data sources. Includes connections to Netezza and Neoview.
The following sample code shows how to create a flat file source object. The example uses Field objects to
hold the metadata for each field and a list that contains all the fields for the source.
protected Source createOrderDetailSource() {
    List<Field> fields = new ArrayList<Field>();
    Field field1 = new Field("OrderID", "OrderID", "", NativeDataTypes.FlatFile.INT,
        "10", "0", FieldKeyType.FOREIGN_KEY, FieldType.SOURCE, false);
    fields.add(field1);
    Field field2 = new Field("ProductID", "ProductID", "", NativeDataTypes.FlatFile.INT,
        "10", "0", FieldKeyType.FOREIGN_KEY, FieldType.SOURCE, false);
    fields.add(field2);
    Field field3 = new Field("UnitPrice", "UnitPrice", "", NativeDataTypes.FlatFile.NUMBER,
        "28", "4", FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false);
    fields.add(field3);
    Field field4 = new Field("Quantity", "Quantity", "", NativeDataTypes.FlatFile.INT,
        "10", "0", FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false);
    fields.add(field4);
    Field field5 = new Field("Discount", "Discount", "", NativeDataTypes.FlatFile.INT,
        "10", "0", FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false);
    fields.add(field5);
    Field field6 = new Field("VarcharFld", "VarcharFld", "", NativeDataTypes.FlatFile.STRING,
        "5", "0", FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false);
    fields.add(field6);
    Field field7 = new Field("Varchar2Fld", "Varchar2Fld", "", NativeDataTypes.FlatFile.STRING,
        "5", "0", FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false);
    fields.add(field7);

    ConnectionInfo info = getFlatFileConnectionInfo();
    info.getConnProps().setProperty(ConnectionPropsConstants.SOURCE_FILENAME, "Order_Details.csv");
    Source ordDetailSource = new Source("OrderDetail", "OrderDetail", "This is Order Detail Table",
        "OrderDetail", info);
    ordDetailSource.setFields(fields);
    return ordDetailSource;
}
    infoProps.getConnProps().setProperty(ConnectionPropsConstants.FLATFILE_DELIMITERS, ";");
    infoProps.getConnProps().setProperty(ConnectionPropsConstants.DATETIME_FORMAT,
        "A 21 yyyy/mm/dd hh24:mi:ss");
    infoProps.getConnProps().setProperty(ConnectionPropsConstants.FLATFILE_QUOTE_CHARACTER, "DOUBLE");
    return infoProps;
}
You can also create target tables with similar code. In addition, you can create target objects dynamically
when you create mappings.
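As a hedged illustration only, a flat file target might be created in much the same way as the source above. This sketch reuses the field list and the flat file connection approach from the createOrderDetailSource example; the TARGET_FILENAME constant, the setFields call on Target, and all names are assumptions rather than confirmed API details.
protected Target createOrderDetailTarget(List<Field> fields) {
    // Reuse the flat file connection settings from the source example (assumed helper).
    ConnectionInfo info = getFlatFileConnectionInfo();
    // Point the target at its output file; the constant name is an assumption.
    info.getConnProps().setProperty(ConnectionPropsConstants.TARGET_FILENAME, "Order_Details_out.csv");
    Target ordDetailTarget = new Target("OrderDetailTgt", "OrderDetailTgt",
        "This is the Order Detail target", "OrderDetailTgt", info);
    // Assumed to mirror Source.setFields so that the target uses the same field metadata.
    ordDetailTarget.setFields(fields);
    return ordDetailTarget;
}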
Creating Mappings
Mappings are complex objects representing the data flow between sources and targets and the
transformations to move data from sources to targets. The mapping object stores the links from the source
objects through one or more transformation objects to the target objects. The links connect the ports from
one object to the next object in the data flow.
Data Flow
Data flow linkage in the mapping is done on an exception basis. The Design API allows you to specify the
data flow to and from the transformation ports that you want to use. The ports that are not necessary for the
data transformation flow through automatically. This approach simplifies the programmatic specification of
the mapping.
• Rowset. A class that contains a collection of field objects that represents input to a transformation or
target or output from a transformation or source. The rowset corresponds to a single group of ports in a
transformation, source, or target.
• Input set. A class that contains a rowset that represents one group of input ports to a transformation. The
class also has the corresponding propagation and linking context objects that determine what ports are
propagated and how they are linked to a downstream transformation. The input set is used whenever a
new transformation is created in the data flow, and defines the input ports to the new transformation.
Note that multiple input sets will be needed for transformations and targets that are multi-group.
• Output set. This class encapsulates the output of a transformation. It can contain a single rowset or
multiple rowsets depending on whether it represents a single-group or multi-group output. For example, the
output set for a Filter transformation contains one rowset, but the output set for a Router transformation
contains multiple rowsets.
By default, all ports are propagated from the input set. You can use the PortPropagationContextFactory class
to define the propagation strategy and control which ports are propagated. You can use one of the following
propagation strategies:
// propagate only Manufacturer_Name
vInputSets.add(new InputSet(lookupRS, lkpRSContext));
The following code example shows how to use the exclude rule to propagate ports:
PortPropagationContext exclOrderCost =
    PortPropagationContextFactory.getContextForExcludeColsFromAll(new String[] { "OrderCost" }); // exclude OrderCost
• Port link context object. Context object for passing the object information needed for linking ports. The
values of the context object depend on the link type. Port link context indicates which strategy is used to
connect input ports to ports in the downstream transformation.
You can use one of the following linking strategies:
• By Name. Link ports based on matching names. Use this strategy when port names between the from and
to transformations are the same. This is the default linking strategy.
• By Position. Link ports based on position. The first input port connects to the first port in the
transformation, and the second input port connects to the second port in the transformation. Use this
strategy to link ports by matching their positions.
• By Hashmap. Link ports based on a map that lists the from and to ports. Use this strategy to link ports
based on a pre-defined list of matched names. Use this strategy to connect ports to targets where the
target ports are different from the incoming port names.
The following sample code shows how to link ports by position. The ports are linked from the Source Qualifier
transformation to the Expression transformation in the order of the ports in the Source Qualifier.
public List<Field> getLinkFields() {
    List<Field> fields = new ArrayList<Field>();
    Field field1 = new Field( "EmployeeID1", "EmployeeID1", "", TransformationDataTypes.INTEGER,
        "10", "0", FieldKeyType.PRIMARY_KEY, FieldType.TRANSFORM, true );
    fields.add( field1 );
    Field field2 = new Field( "LastName1", "LastName1", "", TransformationDataTypes.STRING,
        "20", "0", FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
    fields.add( field2 );
    Field field3 = new Field( "FirstName1", "FirstName1", "", TransformationDataTypes.STRING,
        "10", "0", FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
    fields.add( field3 );
    return fields;
}
// write to target
mapping.writeTarget( expRS, outputTarget );
The following sample code and mapping show how to use a hashmap to link ports:
// create a stored procedure transformation
List<TransformField> vTransformFields = new ArrayList<TransformField>();
Field field1 = new Field( "RetValue", "RetValue", "This is return value",
    TransformationDataTypes.INTEGER, "10", "0",
    FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
TransformField tField1 = new TransformField( field1, PortType.RETURN_OUTPUT );
vTransformFields.add( tField1 );
Field field2 = new Field( "nID1", "nID1", "This is the ID field",
    TransformationDataTypes.INTEGER, "10", "0",
    FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
TransformField tField2 = new TransformField( field2, PortType.INPUT );
// vTransformFields.add( tField2 );
Field field3 = new Field( "outVar", "outVar", "This is the Output field",
    TransformationDataTypes.STRING, "20", "0",
    FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
TransformField tField3 = new TransformField( field3, PortType.INPUT_OUTPUT );
vTransformFields.add( tField3 );
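The snippet above creates the transformation fields but not the link map itself. The following is a minimal sketch of how such a map might be built and applied, using the getPortLinkContextByMap pattern shown later in this guide; spRS (the stored procedure RowSet), the getField call on the target, and the port names are assumptions.
// Hedged sketch: pair each upstream field with the target field it should link to.
Map<Field, Field> linkMap = new LinkedHashMap<Field, Field>();
linkMap.put(spRS.getField("outVar"), outputTarget.getField("OUT_VALUE"));   // from-port -> to-port (assumed names)
// Wrap the map in a port link context and write to the target.
InputSet linkedIS = new InputSet(spRS,
    PortLinkContextFactory.getPortLinkContextByMap(linkMap));
mapping.writeTarget(linkedIS, outputTarget);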
Related Topics:
• “Sample Patterns for Regular Expressions for Port Propagation” on page 178
Creating Transformations
The Transformation helper class simplifies the process of creating transformations in a mapping object.
You can use the Design API to create the following types of transformations:
• Aggregator
• Application Source Qualifier
• Custom
• Data Masking
• Expression
• External Procedure
• Filter
• HTTP
• Input
• Java
• Joiner
• Lookup
• Mapplet
• Normalizer
• Rank
• Router
• Sequence Generator
• Sorter
• Source Qualifier
• SQL
• Stored Procedure
• Transaction Control
• Union
• Update Strategy
• XML Generator
• XML Parser
• XML Source Qualifier
The following sample code shows how to use the transformation helper class to create a Lookup
transformation. Note that only the Manufacturer_Id port is linked to the Lookup transformation, and the
Manufacturer_Name port is propagated to the target from the lookup.
// create the DSQ transformation
OutputSet outputSet = helper.sourceQualifier(itemsSrc);
RowSet dsqRS = (RowSet) outputSet.getRowSets().get(0);
PortPropagationContext dsqRSContext =
    PortPropagationContextFactory.getContextForExcludeColsFromAll(new String[] { "Manufacturer_Id" });
...
// write to target
mapping.writeTarget(vInputSets, outputTarget);
Use the Design API to create a Session object from the Mapping object. You can set the attributes of the
Session object, including connectivity to the source and target. You can create a Workflow object with one or
more task objects.
The following sample code shows how to create a workflow with a single session:
/**
 * Create session
 */
protected void createSession() throws Exception {
    session = new Session( "Session_For_Filter", "Session_For_Filter",
        "This is session for filter" );
    session.setMapping( this.mapping );
}

/**
 * Create workflow
 */
protected void createWorkflow() throws Exception {
    workflow = new Workflow( "Workflow_for_filter", "Workflow_for_filter",
        "This workflow for filter" );
    workflow.addSession( session );
    folder.addWorkFlow( workflow );
}
The following sample code shows how to create a workflow with multiple tasks:
private void createTasks() {
    assignment = new Assignment("assignment", "assignment", "This is a test assignment");
    assignment.addAssignmentExpression("$$var1", "1");
    assignment.addAssignmentExpression("$$var2", "$$var1 + 5");
    assignment.addAssignmentExpression("$$var1", "$$var2 -10");
    control = new Control("control", "control", "This is a test control");
    control.setControlOption(Control.ControlOption.ABORT_PARENT);
    assignment.connectToTask(control, "$assignment.ErrorCode != 0");
    decision = new Decision("decision", "decision", "This is a test decision");
    decision.setDecisionExpression("1 + 2");
    absTimer = new Timer("absTimer", "absTimer", "absolute timer",
        TimerType.createAbsoluteTimer(new Date()));
    decision.connectToTask(absTimer);
    relTimer = new Timer("relTimer", "relTimer", "relative timer",
        TimerType.createRelativeToPreviousTaskTimer(3, 5, 10,
        TimerType.TIMER_TYPE_START_RELATIVE_TO_TOPLEVEL_WORKFLOW));
    absTimer.connectToTask(relTimer);
    varTimer = new Timer("varTimer", "varTimer", "variable timer",
        TimerType.createVariableTimer("$$timerVar"));
    relTimer.connectToTask(varTimer);
    command = new Command("command", "command", "This is a test command");
    command.addCommand("command1", "ls");
    command.addCommand("command2", "ls -lrt");
    command.addCommand("command1", "df -k .");
    varTimer.connectToTask(command);
    email = new EMail("myEmail", "myEmail", "my email task");
    email.setEmailUsername("[email protected]");
    email.setEmailSubject("Welcome to Informatica");
    email.setEmailText("This is a test mail");
    command.connectToTask(email);
}
protected void createWorkflow() throws Exception {
    workflow = new Workflow("Workflow_for_OtherTasks", "Workflow_for_OtherTasks",
        "This workflow for other types of tasks");
    WorkflowVariable wfVar1 = new WorkflowVariable("$$var1",
        WorkflowVariableDataTypes.INTEGER, "1", "var1");
    WorkflowVariable wfVar2 = new WorkflowVariable("$$var2",
        WorkflowVariableDataTypes.INTEGER, "1", "var2");
    WorkflowVariable wfVar3 = new WorkflowVariable("$$timerVar",
        PowerMartDataTypeConstants.TIMESTAMP, "", "timerVariable");
    workflow.addWorkflowVariable(wfVar1);
    workflow.addWorkflowVariable(wfVar2);
    workflow.addWorkflowVariable(wfVar3);
    createTasks();
    workflow.addTask(assignment);
    workflow.addTask(control);
    workflow.addTask(decision);
    workflow.addTask(command);
    workflow.addTask(absTimer);
    workflow.addTask(relTimer);
    workflow.addTask(varTimer);
    workflow.addTask(email);
    workflow.addSession(session);
    folder.addWorkFlow(workflow);
}
The following workflow shows the tasks created by the previous code:
The following sample code shows how to update a connection object in the repository:
try
{
myRepo.updateConnection(connObj);
} catch (RepoConnectionObjectOperationException e)
{
e.printStackTrace();
}
You can also use the Design API to export and import metadata in the PowerCenter repository. Use the
pcconfig.properties file to specify the repository connection information and the import and export options.
The following example shows the contents of a sample pcconfig.properties file that includes attributes for
connecting to the domain and repository.
PC_CLIENT_INSTALL_PATH=client path ;the path where PowerCenter Client is installed
PC_SERVER_INSTALL_PATH=server path ;the path where the PowerCenter Server is installed
TARGET_FOLDER_NAME=demomapp ;the folder name
TARGET_REPO_NAME=repo123 ;the repository containing the folder
REPO_SERVER_HOST=S158244 ;the host machine name on the network
REPO_SERVER_PORT=5001 ;the repository server port
ADMIN_USERNAME=Administrator ;admin username
ADMIN_PASSWORD=Administrator ;admin password
SERVER_PORT=4001 ;the server port on which the server is running. This is unused as of now.
DATABASETYPE=Oracle ;the database type
The following sample code shows how to use the Design API to export mapping metadata from the
repository:
public void generateOutput() throws Exception {
    MapFwkOutputContext outputContext = new MapFwkOutputContext(
        MapFwkOutputContext.OUTPUT_FORMAT_XML,
        MapFwkOutputContext.OUTPUT_TARGET_FILE,
        mapFileName);
    try {
        intializeLocalProps();
    }
    catch (IOException ioExcp) {
        System.err.println( "Error reading pcconfig.properties file." );
        System.err.println( "The properties file should be in directory where Mapping Framework is installed.");
        System.exit( 0 );
    }
    boolean doImport = false;
    if (runMode == 1) doImport = true;
    rep.save(outputContext, doImport);
    System.out.println( "Mapping generated in " + mapFileName );
}
In a data warehouse that uses a star schema, fact tables, such as a customer order or product shipment
table, quickly grow in volume. Dimension tables, such as a customer or product table, are comparatively
static and do not change very often. The example presented in this chapter uses a workflow to automate the
tasks for maintaining the slowly changing dimension (SCD) tables.
The SlowChangingDimensions application is written in Groovy, a high-level language for the Java platform. It
calls the methods of the Design API to connect to a database table that you specify and extract metadata
about the table to create the sources and target for the application. It uses the Design API to generate the
mapping logic to capture changes to the table and to create the session and workflow that run the mapping.
The compiled classes run on JDK 1.5, which is installed with PowerCenter. You can run the application on the
command line and use a configuration file to set parameters.
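For example, assuming the batch file forwards its arguments to the application, an invocation might look like the following, where the connection and table names are placeholders:
SlowChangingDimension.bat -s SRC_ORACLE -t TGT_ORACLE -n CUSTOMER_DIM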
The following sample code shows how the application is initialized with default values:
// Make helper object to call the methods like initLogger
SlowChangingDimensions sm = new SlowChangingDimensions()
// Read properties from file or use at least defaults
sm.initializeJmfdemoProps()
// Some auxiliary variables
def sourceConnection = props.defaultSourceConnection;
def targetConnection = props.defaultTargetConnection
def tableName = props.defaultTableName
if (options.h) { cli.usage();return}
if (options.s) sourceConnection = options.s
if (options.t) targetConnection = options.t
if (options.n) tableName = options.n
The following sample code shows how metadata is retrieved from the database:
// Create a DbTable object for the given table
DbTable tab = new DbTable(props.userName, tableName,
    props.url.split(":")[1].toUpperCase(),
    props.Password, props.driverName);
The following sample code shows the sequence of calls in the main method:
// Now use the JMF to create the needed metadata
JMFSlowCD jmfFlow = new JMFSlowCD(tab, sourceConnection, targetConnection)
// read properties file for repository connection (to use pmrep for read & write)
initializePmrepProps()
if (options.l) { printAllConnections(); }
The createMapping() method creates a mapping with all dependent objects such as sources and targets. The
createSession() method adds a session task to the workflow. The generateOutput() method uses the Design
API XMLWriter to write the PowerCenter objects to a PowerCenter export file.
Note: In this example, the slowly changing dimension copy of the source table is equivalent to the target
table.
The full outer join of the source table and its slowly changing dimension copy produces the following cases:
1. All primary key values from the source table are set and the key values from the target table are not set.
This indicates a new record, which has to be inserted into the table with valid_From =
SESSIONSTARTTIME and valid_To = 31.12.9999.
2. All primary key values from the source are not set but the key values from the target table are set.
This indicates that the record was deleted in the source table and has to be invalidated in the target
table by updating valid_To = SESSIONSTARTTIME – 1 ns.
3. Both primary key values are set, and the non-primary key columns have to be compared.
If all the compared column values are the same, there is no change in the data. If at least one column value
has changed, the existing target record has to be updated and the new source record has to be inserted, as
illustrated in the sketch that follows this list.
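The following sketch restates these rules with assumed boolean flags rather than actual Design API calls; srcKeyPresent, tgtKeyPresent, and payloadChanged are illustrative names only.
// Hedged sketch of the three-way classification after the full outer join.
String op;
if (srcKeyPresent && !tgtKeyPresent) {
    op = "INSERT";        // new record: valid_From = session start, valid_To = 31.12.9999
} else if (!srcKeyPresent && tgtKeyPresent) {
    op = "DELETE";        // removed in source: close target record with valid_To = session start - 1 ns
} else if (payloadChanged) {
    op = "UPDATE";        // close the old version and insert the new version
} else {
    op = "NO_CHANGE";     // keys match and no payload column changed
}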
The following figure shows the results of the comparison:
The OP column shows the resulting logical operation: insert, update, or delete. The insert is a normal insert
into the database. The update is split into an update of the former record in the target table with a changed
valid_To value and an insert of the new version. A delete is an update with an adjusted valid_To value.
Depending on the frequency of changes and update runs, most of the processing consists of reading both
tables, with relatively few insert or update operations.
For efficiency, the data must be sorted in primary key order. The Source Qualifier transformation must have
the number of sorted ports set to the number of primary key columns and all primary key columns must be
sorted first. Also, the source copy Source Qualifier transformation must have the following source filter:
valid_to >= '31.12.9999'
In the mapping, the Joiner transformation has a join condition for all primary key values depending on their
numbers. The Sorted input property is switched on to allow fast processing.
The Expression transformation adds timestamp values for sessionstarttime and the timestamp before
sessionstarttime (usually sessionstarttime – 1 ns). It also adds the name of the workflow that ran the
mapping. The mapping name is stored in a mapping variable.
A Router transformation identifies the different cases for update or insert. The router conditions include a
comparison of fields that must work with null values in the tables. The update strategy sets the insert or the
update strategy for the resulting data flows.
Retrieving Metadata
The constructor for DbTable tab gets the database user name, table name, and database type from the JDBC
URL. The readColumns method uses the JDBC URL, user name, password, and driver name to get a JDBC
connection type. The information is dynamically loaded and used at run-time.
The method readColumns calls the getMetaData() method to create a DatabaseMetaData instance that
retrieves all the required handles for metadata retrieval. The getMetaData() method uses the user name in
the JDBC connection to retrieve all column information for a given table in a schema. Similarly, the
getPrimaryKeys() method retrieves the primary keys and getImportedKeys() retrieves the foreign keys for the
table.
// Search for column information for this table in schema of given user
ResultSet rst = dbmd.getColumns("",
this.schemaName.toUpperCase(),this.tableName.toUpperCase(),"%");
// Search for PK information of table
ResultSet rstpk = dbmd.getPrimaryKeys("",
this.schemaName.toUpperCase(),
this.tableName.toUpperCase());
// Search for FK information of table
ResultSet rstfk = dbmd.getImportedKeys("",
this.schemaName.toUpperCase(),
this.tableName.toUpperCase());
The getPrimaryKeys() and getImportedKeys() methods return Java ResultSets. The iteration loop through the
ResultSet creates the DbColumn objects that store the information and adds the corresponding attribute
values. The objects are stored in a HashArray object instance named mapColumns and a List object instance
for the column names.
The following sample code shows the iteration loop to create the DbColumn objects:
// generate DbColumn object for each column
while(rst.next())
{
String colname = rst.getString("COLUMN_NAME")
DbColumn col = new DbColumn(
this.schemaName.toUpperCase(),
this.tableName.toUpperCase(),
colname,
rst.getInt("ORDINAL_POSITION")
)
col.typeName = rst.getString("TYPE_NAME")
col.columnSize = rst.getInt("COLUMN_SIZE")
col.decimalDigits = rst.getInt("DECIMAL_DIGITS")
col.remarks = rst.getString("REMARKS")
col.isNullable = rst.getString("IS_NULLABLE") == "YES" ? true : false
col.isPrimaryKey = false
mapColumns[colname] = col
colNameList << colname
}
The Source Qualifier reads the data in primary key order. This requires that the primary key fields are defined
as the first ports, even if they are defined in the database in different order. The number of sorted ports must
be set to the number of primary key columns. To ensure the order of the ports, an auxiliary list object
pkColNameList stores the primary key column names in the right order. Fields that are not primary keys,
which are also called payload columns, are stored in a payloadColNameList object.
The following sample code shows how to create the list of primary key and payload columns:
/**
* For each PK component in PK result set pass over the found columns and set PK
attribute if needed
*/
while(rstpk.next())
{
// get column name of PK
String name=rstpk.getString("COLUMN_NAME")
// remember the pk in a separate array
pkColNameList << name
// set the attributes for the column as needed for primary keys
mapColumns[name].identity{
it.isPrimaryKey = true
it.isNullable = false
}
}
// make an own list of all non pk columns in order of database
The following sample code shows how foreign key columns can be identified:
/**
 * For each FK component in the FK result set, pass over the found columns and set
 * the FK attributes if needed
 */
//Foreign Keys from EMP2
//==========================================================
//PKTABLE_CAT|PKTABLE_SCHEM|PKTABLE_NAME|PKCOLUMN_NAME|FKTABLE_CAT|FKTABLE_SCHEM|
FKTABLE_NAME|FKCOLUMN_NAME|KEY_SEQ|UPDATE_RULE|DELETE_RULE|FK_NAME|PK_NAME|DEFERRABILITY|
//--------------------------------------------------------
//null|SCOTT|DEPT|DEPTNO|null|SCOTT|EMP2|DEPTNO|1|null|1|FK_DEPTNO2|PK_DEPT|7|
while(rstfk.next())
{
// get column name of the FK
String name=rstfk.getString("FKCOLUMN_NAME")
// Search over all columns, compare name and set attributes if column was found
mapColumns[name].identity{
it.refTable = rstfk.getString("PKTABLE_NAME")
it.refField = rstfk.getString("PKCOLUMN_NAME")
it.refSchema = rstfk.getString("PKTABLE_SCHEM")
it.remarks = "FK "+rstfk.getString("FK_NAME")+" -> PK " +rstfk.getString("PK_NAME")
}
The following sample code shows how to create the repository and folder instance:
rep = new Repository( "PCquick",
"Powercenter repository for PCquick",
"This repository is for the PCquick application" );
folder = new Folder( "PCQ",
"PCquick folder",
"This is a folder containing the objects from PCQ application" );
rep.addFolder( folder )
The following sample code shows how a mapping instance is created in a folder:
String name = "m_${dbTable.tableName}_SCD"
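The remainder of that step is not shown above. A minimal sketch, assuming the three-argument Mapping constructor used in the partitioning example later in this guide and the folder instance created earlier; the description string is illustrative.
// Hedged sketch: create the mapping instance and register it with the folder.
Mapping mapping = new Mapping(name, name, "SCD mapping for table " + dbTable.tableName);
folder.addMapping(mapping);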
Creating Sources
After the mapping instance is created, the application creates the source and target object instances. It uses
the createSource and createSourceForTarget methods to create the source and target with the table
metadata as parameters in the DbTable object.
The createSourceForTarget method creates a second source instance appropriate for the target table setting.
The target connection types and names have to be specified, including the columns containing the
valid_From and valid_To timestamps and runtime information.
The createSource and createSourceForTarget methods return instances of a source object, which are added
to the folder. This step is not required. By default, the XML writer object in the Design API creates a copy of
the sources in a mapping in the folder.
The following sample code shows how instances of the source object are created:
// Now add the source to read from to the folder. The fields are derived from the dbTable object.
Source source = createSource(dbTable)
folder.addSource(source)
// From the created source, derive a variant that has additional administrative fields including valid_From/valid_To.
Source source_tgt = createSourceForTarget(source)
folder.addSource(source_tgt)
The createSource() and createSourceForTarget() methods are similar. They differ in the fields that they can
contain. The createSource() method creates an empty Vector, which is filled with Field object instances, one
per column. The primary key columns are added first, followed by the payload columns. For each column, the
attributes are set to indicate primary or foreign keys. The Design API expects the key columns to be of type
string, not integer.
The following sample code shows how a vector of source fields is created:
Vector fields = new Vector();
// Iterate over all columns of given table,
// starting with pk, then with payload columns
[ tab.pkColNameList, tab.payloadColNameList].flatten().each {
log.log(Level.FINE,"Process column "+it)
Assign a new ConnectionInfo object instance to the source. The ConnectionInfo object instance contains an
instance of the class ConnectionProperties. Initialize the ConnectionProperties class with the source
connection type properties. Additionally, the property element
ConnectionPropsConstants.CONNECTIONNAME must be set to the value of the sourceConnectionName
string. These settings must be configured in the source object instance.
The following sample code shows how the source object is generated based on a Vector:
// Create source object with type sourceConnectionType
// using the name but prefix instance name with s_
Source source = new Source (name, name, name,
"s_"+name, new ConnectionInfo(sourceConnectionType)) ;
// assign the field set to the source (add columns)
source.setFields( fields )
// create a connection info for the given type
ConnectionInfo connectionInfo = new ConnectionInfo(sourceConnectionType)
// get the properties location for this connection info
ConnectionProperties connectionProperties = connectionInfo.getConnProps()
Creating Targets
Since the mapping writes to two instances of the target table in two streams, the application must create a
target instance for inserts and another target instance for updates. The two instances of the target object
have different names but use the same tables. The simplest way to create the targets is to create two empty
target objects, assign the properties for the relational target type and connection name, and then fill them
with the same fields used by the source. This ensures that all columns are included and both source and
target object represent the same metadata. PowerCenter ensures that the internal datatypes are correctly
mapped into the appropriate source and target types.
The following sample code shows how the targets are created and assigned to the mapping:
/**
* Create relational target
*/
Target createRelationalTarget( int type, String name, String prefix = "t_" ) {
Target target = new Target(name,
name, name,
prefix+name, new ConnectionInfo( type ) ) ;
target.getProps().
setProperty(ConnectionPropsConstants.CONNECTIONNAME,targetConnectionName);
return target;
}
The following sample code shows how to use the transformation helper instance:
// Create helper to simplify creation of further transformations
TransformHelper helper = new TransformHelper(mapping);
The Source Qualifier transformations have the same name as the associated source prefixed with "SQ_". You
can use this name to retrieve the transformation object reference. Setting the number of sorted ports to the
number of primary key columns makes the Source Qualifier read the data in primary key order.
The following sample code shows how the source qualifier is created and assigned the properties for sorted
read:
// Pipeline creation, one per source defined in the folder (of this object model)
// create mapping source and DSQ with helper and use it via RowSet
RowSet dsqRS = (RowSet) helper.sourceQualifier(source).getRowSets().get(0);
RowSet dsqRStgt = (RowSet) helper.sourceQualifier(source_tgt).getRowSets().get(0);
// set sorted ports = number of primary key fields of the source for both source qualifiers
// (the target has only one record per selection criterion)
// for the target SQ, additionally set the selection criterion to VALID_TO = 31.12.9999
// get the properties object for the SQ of the source table via the transformation name (prefix SQ_s_)
def sqProps = mapping.getTransformation("SQ_s_${dbTable.tableName}").getProperties()
// set the sorted ports property
sqProps.setProperty(TransformPropsConstants.NO_OF_SORTED_PORTS,
    dbTable.pkColNameList.size().toString())
// get the properties object for the SQ of the target table via the transformation name (prefix SQ_s_)
def sqPropsTgt = mapping.getTransformation("SQ_s_${dbTable.tableName}" + cloneSuffix).getProperties()
// set the sorted ports property
sqPropsTgt.setProperty(TransformPropsConstants.NO_OF_SORTED_PORTS,
    dbTable.pkColNameList.size().toString())
// set the selection criterion
sqPropsTgt.setProperty(TransformPropsConstants.SOURCE_FILTER,
    dbTable.tableName + cloneSuffix + ".VALID_TO >= TO_DATE('31.12.9999','DD.MM.YYYY' )")
Additionally, the Joiner transformation requires the detail table to be input as an InputSet packaged in a
Vector with one element. The ports of the detail pipeline are prefixed with "IN_" to prevent port name
collisions. The join compares the prefixed ports of the detail table with the unprefixed ports of the master
table. The join condition is created as a string that references the equality of all primary key columns in the
source and the "target" source, combined with AND clauses. Shortening the join string by 5 characters
truncates the trailing " AND ".
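As a hedged sketch only, the join condition string described above might be assembled as follows, using the pkColNameList built during metadata retrieval; the variable names are illustrative.
// Equality of every primary key column between the prefixed detail ports (IN_*)
// and the unprefixed master ports, combined with AND.
String joinCondition = "";
for (String pk : dbTable.pkColNameList) {
    joinCondition += "IN_" + pk + " = " + pk + " AND ";
}
// Drop the trailing " AND " (5 characters).
joinCondition = joinCondition.substring(0, joinCondition.length() - 5);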
The helper.join instance method generates a Joiner transformation with all the parameters, adds the
properties, and connects the ports of the Source Qualifier transformations to the Joiner transformation.
The helper method returns an output set from which the RowSet objects can be retrieved with the
getRowSets() instance method. Because a transformation can have multiple output groups, get the first
group to obtain a handle for the output RowSet.
The following sample code shows how the Joiner transformation instance is created with a full outer join and
sorted input:
// Full outer join, sorted input, PK values must be equal
// create a properties object
TransformationProperties props = new TransformationProperties(); // properties
// set full outer join
props.setProperty(TransformPropsConstants.JOIN_TYPE,"Full Outer Join")
// set sorted input to YES
// the detail input set must be in a vector for the join helper
Vector vInputSets = new Vector()
vInputSets.add(dsqIS); // collection includes only the detail
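The OutputSet that the helper.join call returns (the call itself is not shown in the snippet) can then be unwrapped to the joined RowSet, following the getRowSets() pattern used with the other helper methods in this guide; joinOutputSet is an assumed variable name.
// Hedged sketch: take the first output group of the Joiner transformation.
RowSet joinRS = (RowSet) joinOutputSet.getRowSets().get(0);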
For this example, the application requires an output-only port NOW with the timestamp of the session start
time and another port JUSTBEFORESESSIONSTART containing the timestamp just before the session start
time. The value for NOW is taken from the pre-defined variable SESSSTARTTIME. The value for
JUSTBEFORESESSIONSTART is calculated from the same variable minus 1 ns. The application also sets an
output port DOOMSDAY to the maximum available date. The rest of the added fields reference pre-defined
mapping variables. The newly created column names are suffixed with "_new".
To create the Expression transformation object, use the RowSet object representing the Joiner
transformation and the vector of the TransformField object as parameters of the helper method.
The following sample code shows the steps to create the Expression transformation instance:
// collect a set (vector) for the added transform fields in the coming expression
Vector vFields = new Vector()
vFields.add(new TransformField("date/time (29, 9) NOW= SESSSTARTTIME"))
vFields.add(new TransformField("date/time (29, 9) JUSTBEFORESESSIONSTART= ADD_TO_DATE(SESSSTARTTIME,'NS',-1)"))
vFields.add(new TransformField("date/time (29, 9) DOOMSDAY= MAKE_DATE_TIME(9999,12,31,23,59,59,999999999)"))
vFields.add(new TransformField("string (255, 0) pmmapping_new= \$PMMappingName"))
vFields.add(new TransformField("string (255, 0) pmsession_new= \$PMSessionName"))
vFields.add(new TransformField("string (255, 0) pmworkflow_new= \$PMWorkflowName"))
vFields.add(new TransformField("string (255, 0) pmreposervice_new= \$PMRepositoryServiceName"))
The following sample code shows how to create a Router transformation with two output groups:
// Create a TransformGroup for the router (see TransformGroup creation in own method)
Vector vTransformGrp = new Vector();
vTransformGrp.add( insertTransformGroup() );
vTransformGrp.add( updateTransformGroup() );
// create a Router Transformation
OutputSet routerOutputSet = helper.router( expRS, vTransformGrp,
"Router_transform" );
In the application, the compare string is created and then embedded in a Groovy String. The method returns
the created TransformGroup object instance.
The following sample code shows how the insert group of Router transformation is created:
/*
* Creates Transformgroup for the router insert branch
*/
TransformGroup insertTransformGroup() {
    // PK fields do not exist in the target
    // Any field was different in target and source
    // Create the code snippet that checks for different fields for all non-PK fields
    // Note that NULL does not compare well, so there is more to check!
def fieldDiffer = ""
dbTable.payloadColNameList.each{
fieldDiffer +=
"""
(
(isNull(IN_${it}) AND NOT isNull(${it}))
OR
( NOT isNull(IN_${it}) AND isNull(${it}))
OR
(IN_${it} != ${it})
)
    OR """
    }
    // ignore the trailing " OR "
    fieldDiffer = fieldDiffer[0..fieldDiffer.size()-4]
// Create the group for insert branch
TransformGroup transGrp = new TransformGroup( "INSERT",
"""
IIF(${pkNull},
1,
IIF(${fieldDiffer},1,0)
The application creates a String[] excludes list with the names of the unnecessary ports, including the names
of the "target" source ports and auxiliary fields. The names of the output group ports are dynamically
generated and suffixed with the group number. The first group is the INSERT group, the second group is the
UPDATE group, and the third group is the DEFAULT group. For example, ports in the INSERT group have the
suffix 1 and ports in the UPDATE group have the suffix 2.
The following sample code shows how the updateStrategy object is created with the helper.updateStrategy()
instance method and the exclude context. All rows flagged with DD_INSERT will be inserted.
// get the rowset for the INSERT group
RowSet insertRS = routerOutputSet.getRowSet( "INSERT" );
The following sample code shows how the linkMap can be used to create the input set:
// now link the columns of the rowset to the insertTargets. The simplest approach
// is to use a linkmap which a specific mapping will provide for the target fields
InputSet ilIS = new InputSet(updateStrategyInsertRS,
PortLinkContextFactory.getPortLinkContextByMap(
linkMapForInsert(updateStrategyInsertRS,insertTarget,"1")
)
)
// connect the update strategy transformation (the target itself was already added to the mapping before)
mapping.writeTarget( ilIS, insertTarget);
The update data stream follows a similar algorithm as the insert stream. The update strategy flags the rows
for update and selects other ports. The helper method linkMapForUpdate creates the link map.
The following sample code shows how to use the helper method linkMapForInsert:
/**
* Makes a linkMap for fields between target and rowset in insert stream
* the 4 pm fields are mapped
* @param fromRS Rowset from the last transformation (usually update strategy)
* @param target target instance objects with the fields
* @param suffix Field suffix of fields, usually numbers generated by router group
* @return linkMap<Field,Field>
*/
Map <Field, Field> linkMapForInsert (RowSet fromRS, Target target, String suffix) {
// Make an empty linkMap for collecting the result
Map<Field, Field> linkMap = new LinkedHashMap<Field, Field>();
// Iterate over all target fields
target.getFields().each {
// take field name
def fName = it.getName()
// check for the pm fields
if ( fName ==~ /pmmapping/ ||
fName ==~ /pmsession/ ||
fName ==~ /pmworkflow/ ||
fName ==~ /pmreposervice/)
{
def f = fromRS.getField(fName+"_new"+suffix)
if (f)
linkMap.put(f,it)
}
// check for the valid_From field
else if (fName ==~ /valid_From/) {
def f = fromRS.getField("NOW"+suffix)
if (f)
linkMap.put(f,it)
}
// check for the valid_To field
else if (fName ==~ /valid_To/) {
def f = fromRS.getField("DOOMSDAY"+suffix)
if (f)
linkMap.put(f,it)
}
// all other fields
else {
            def f = fromRS.getField("IN_" + fName + suffix)
            if (f)
                linkMap.put(f, it)
        }
    }
    return linkMap
}
Related Topics:
• “Sample Patterns for Regular Expressions for Port Propagation” on page 178
The following sample code shows the connection attributes for the target instance and the mapping object
added to the folder:
// Now we have to assign some properties to the sources and targets
// set the connection properties
source.getConnInfo().getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, sourceConnectionName)
source_tgt.getConnInfo().getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, targetConnectionName)
insertTarget.getConnInfo().getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, targetConnectionName)
updateTarget.getConnInfo().getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, targetConnectionName)
// set the update target to "Update as Update" to avoid write failures
updateTarget.getConnInfo().getConnProps().setProperty(ConnectionPropsConstants.RELATIONAL_UPDATE_AS_UPDATE, "YES")
// add the mapping to the folder
folder.addMapping(mapping);
The example files include the following components:
• Application libraries. Includes all libraries and files required to run the application.
• JDK or JRE 1.5. You can use the JDK installed with PowerCenter.
• pcconfig.properties. Contains PowerCenter repository connection information required by any plug-in that
calls methods in the Design API.
• jmfdemo.properties. Contains the configuration information required to run the SlowChangingDimensions
sample application. This configuration file is required only for the SlowChangingDimensions sample
application. You do not need this file for other applications or plug-ins.
jmfdemo.properties File
The sample application reads the configuration information from the jmfdemo.properties file. Before you run
SlowChangingDimension, modify the options in the jmfdemo.properties file to match your PowerCenter
environment.
The jmfdemo.properties file includes properties such as the following:
• userName. User name for the user account to log in to the repository.
• logLevel. Level of error messages to write to the log file. Default is INFO.
When you initially run the sample application, the application reads the connection information from the
PowerCenter repository configured in the jmfdemo.properties file. The application saves the connection
information in a file named pmConnections.xml. When you subsequently run the sample application, it reads
the connection information from the pmConnections.xml file instead of the repository.
To force the sample application to read the connection information from the PowerCenter repository, delete
the pmConnections.xml file or run the application with the -c or --getconnections option. When you run the
application with the -c option, the application reads the connection information from the PowerCenter
repository and overwrites the pmConnections.xml file with the new connection information.
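For example, assuming the batch file forwards its arguments to the application, the following call refreshes the cached connection information:
SlowChangingDimension.bat -c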
1. Log in to the database where you want to create the target table and create the CUSTOMER_DIM_SLC
table.
You can also use the Designer to run the SQL statements to create the tables based on the target object.
2. Extract the zipped example files into any directory.
3. Go to the directory where you extracted the files.
4. In the /JMFLIB folder, open the jmfdemo.properties file and modify the properties to match your
PowerCenter environment.
5. Run the SlowChangingDimension.bat file.
Groovy is an open source language that you can download from the following web site:
http://groovy.codehaus.org/Download
The sample application was created using Groovy version 1.5.6. You can use an IDE such as Eclipse or use
the Groovy compiler groovyc to compile on the command line. After you install Groovy, you can set the
GROOVY_HOME environment variable and use the compile.bat file included in the zip file to compile the
source.
Note the following limitations of the sample application:
• The target table is not created by default. You must create the target table before you run the application.
• The application does not verify that the target folder exists. An exception occurs if the target does not
exist and pmrep fails to import the XML file.
This appendix also provides examples of regular expressions that you can use in your port propagation
strategies.
Using Mapplets
The following sample code shows how to create a mapplet and use it in a mapping:
//creating a mapplet object
Mapplet mapplet = new Mapplet("MappletSample", "MappletSample",
"This is a MappletSample mapplet");
PortPropagationContext filterRSContext = PortPropagationContextFactory
.getContextForAllIncludeCols();
helperMapplet.outputTransform(inputSets, "outputIdPost");
//creating the List of InputSet that will be used for creating the mapplet transformation
List<InputSet> inSets = new ArrayList<InputSet>();
inSets.add(new InputSet(dsqIdRS));
// write to target
mapping.writeTarget((RowSet) vMappRS.get(0), idPostTrg);
The following sample code shows how to refer to a shortcut to a source in a mapping:
ShortCut scSrc = new ShortCut("sc_src_age", "shortcut to source", "Repo_rjain", "temp",
"age", RepositoryObjectConstants.OBJTYPE_SOURCE, ShortCut.LOCAL);
folder.addShortCut(scSrc);
mapping.addShortCut(scSrc);
scSrc.setRefObject(mySource); // mySource is the source object fetched from repository.
OutputSet outSet = helper.sourceQualifier(scSrc);
RowSet dsqRS = (RowSet) outSet.getRowSets().get( 0 );
e.printStackTrace();
} catch (IOException e)
{
e.printStackTrace();
}
The following sample code shows how to create a PowerCenter session parameter file:
List<MappingVariable> vMappingVars = new ArrayList<MappingVariable>();
// While creating mapping variables, we use transformation data types because mapping
// variables belong to the mapping, not to a source or target.
vMappingVars.add(mpVar1);
vMappingVars.add(mpVar2);
vMappingVars.add(mpVar3);
vMappingVars.add(mpVar4);
vMappingVars.add(mpVar5);
vMappingVars.add(mpVar6);
Using Partitions
The following sample code shows how to use partitions in a mapping:
protected void createMappings() throws Exception {
// create a mapping
mapping = new Mapping( "PartitioningExample", " PartitioningExample ", "This is
a partitioning example." );
setMapFileName( mapping );
TransformHelper helper = new TransformHelper( mapping );
// creating DSQ Transformation
OutputSet outSet = helper.sourceQualifier( employeeSrc );
RowSet dsqRS = (RowSet) outSet.getRowSets().get( 0 );
TransformationProperties props = new TransformationProperties();
return ppd;
}
import java.util.ArrayList;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.StringConstants;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.Transformation;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationConstants;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationContext;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
import com.informatica.powercenter.sdk.mapfwk.metaextension.MetaExtension;
public SAPWriterSample() {
target = null;
source = null;
dsqTransform = null;
}
@Override
protected void createSources() {
// create an Oracle relational source and add it to the folder
this.source = this.createOracleJobSource("OrclSRC_SAPWRTER");
this.folder.addSource(this.source);
}
@Override
protected void createTargets() {
// the commented-out code below shows sample SAP target field definitions
// List<Field> fields = new ArrayList<Field>();
//
// Field mandtField = new Field( "JOB_ID", "Client", "Client",
// NativeDataTypes.SAP.CLNT, "3", "0",
// FieldKeyType.PRIMARY_KEY, FieldType.SOURCE, true );
// fields.add( mandtField );
//
// Field eblnField = new Field( "EBELN", "Purchasing document number",
"Purchasing document number",
// NativeDataTypes.SAP.CHAR, "10", "0",
// FieldKeyType.PRIMARY_FOREIGN_KEY, FieldType.SOURCE, true );
// fields.add( eblnField );
//
// Field bukrsField = new Field( "BUKRS", "Company Code", "Company Code",
// NativeDataTypes.SAP.CHAR, "4", "0",
// FieldKeyType.NOT_A_KEY, FieldType.SOURCE, true );
// fields.add( bukrsField );
//
// Field pincrField = new Field( "PINCR", "Item number interval", "Item number
interval",
// NativeDataTypes.SAP.NUMC, "5", "0",
// FieldKeyType.NOT_A_KEY, FieldType.SOURCE, false );
// fields.add( pincrField );
this.folder.addTarget(this.target);
}
@Override
protected void createMappings() throws Exception {
// create a mapping that writes the source rows to the SAP target
mapping = new Mapping("Mapping_SAP_Table_Writer", "Mapping_SAP_Table_Writer",
"This is Mapping_SAP_Table_Writer");
setMapFileName(mapping);
mapping.writeTarget(dsqRS, this.target);
folder.addMapping(mapping);
}
@Override
protected void createSession() throws Exception {
// create a session and assign the source connection
session = new Session( "session_For_SAP_Table_Writer",
"Session_For_SAP_Table_Writer",
"This is session for SAP_Table_Writer" );
ConnectionInfo src_connInfo = new ConnectionInfo(SourceTargetType.Oracle);
src_connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME,
"Oracle");
session.addConnectionInfoObject(source, src_connInfo);
}
@Override
protected void createWorkflow() throws Exception {
// create a workflow for the session
workflow = new Workflow( "Workflow_for_SAP_Table_Writer",
"Workflow_for_SAP_Table_Writer",
"This workflow for SAP_Table_Writer" );
workflow.addSession( session );
folder.addWorkFlow( workflow );
}
The following sample code shows how you can use the same source for two different pipelines:
public void createMappings() throws Exception {
// create a mapping
mapping = new Mapping( "SourceCloneMapping", "mapping", "Testing SourceClone sample" );
setMapFileName( mapping );
TransformHelper helper = new TransformHelper( mapping );
// creating DSQ Transformation
OutputSet outSet = helper.sourceQualifier( employeeSrc );
RowSet dsqRS = (RowSet) outSet.getRowSets().get( 0 );
// write to target
mapping.writeTarget( dsqRS, outputTarget );
// clone the source and target
Source empSrcClone = (Source) employeeSrc.clone();
empSrcClone.setInstanceName( empSrcClone.getName() + "_clone" );
Target targetClone = (Target) outputTarget.clone();
targetClone.setInstanceName( outputTarget.getName() + "_clone" );
mapping.addTarget( targetClone );
// create DSQ and write to target
outSet = helper.sourceQualifier( empSrcClone );
dsqRS = (RowSet) outSet.getRowSets().get( 0 );
mapping.writeTarget( dsqRS, targetClone );
folder.addMapping( mapping );
}
Sources
The following code examples show how to create and use different types of sources in a mapping.
source.createField("DTL__CAPXRESTART1",srcGrp,"","","PACKED","25","0",FieldKeyType.NOT_A_
KEY,FieldType.SOURCE, false);
source.createField("DTL__CAPXRESTART2",srcGrp,"","","string","10","0",FieldKeyType.NOT_A_
KEY, FieldType.SOURCE, false);
source.setMetaExtensionValue("Access Method", "V");
source.setMetaExtensionValue("Map Name", "ct_ALLDTYPES_SRC");
source.setMetaExtensionValue("Original Name" , "ALLDTYPES_SRC");
source.setMetaExtensionValue("Original Schema", "PWXUDB");
List<ConnectionInfo> connInfos = source.getConnInfos();
for (int i=0;i<connInfos.size();i++)
{
ConnectionInfo connInfo = (ConnectionInfo) connInfos.get(i);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME,
"myTestConnection");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"myDBName");
}
}catch (RepoOperationException e)
{// TODO Auto-generated catch block
e.printStackTrace();
}catch (MapFwkException e)
{// TODO Auto-generated catch block
e.printStackTrace();
}
folder.addSource(source);
this.mapFileName = "PowerExchangeSource.xml";
}
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TransformField;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
/**
* This example applies a simple expression transformation on the Employee table
* and writes to a target.
*
*/
public class NetezzaSample extends Base {
// //////////////////////////////////////////////////////////////
// Instance variables
// //////////////////////////////////////////////////////////////
protected Source employeeSrc;
protected Target outputTarget;
/**
* Create sources
*/
protected void createSources() {
employeeSrc = this.createNetezzaSource();
folder.addSource( employeeSrc );
}
/**
* Create targets
*/
protected void createTargets() {
outputTarget = this.createRelationalTarget(SourceTargetType.Flat_File, "Target");
}
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSession()
*/
protected void createSession() throws Exception {
session = new Session( "Session_For_netezza", "Session_For_netezza",
"This is session for Netezza" );
session.setMapping( this.mapping );
/* Configure all the Netezza connection-related properties for the DSQ at the session level */
Properties props = new Properties();
props.put(ConnectionPropsConstants.CONNECTIONNAME, "Netezza");
props.put(ConnectionPropsConstants.USER_DEFINED_JOIN, "asdasd");
props.put(ConnectionPropsConstants.NUMBER_OF_SORTED_PORTS, "10");
props.put(ConnectionPropsConstants.TRACING_LEVEL, "Verbose Data");
props.put(ConnectionPropsConstants.PRE_SQL, "sdada");
props.put(ConnectionPropsConstants.POST_SQL, "TD_OPER_CLI");
props.put(ConnectionPropsConstants.SOURCE_FILTER, "asdad");
session.addSessionTransformInstanceProperties(employeeSrc, props);
}
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow( "Workflow_netezza", "Workflow_netezza",
"This workflow for Netezza" );
workflow.addSession( session );
folder.addWorkFlow( workflow );
}
}
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.DSQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.SAPASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.SAPFunction;
import com.informatica.powercenter.sdk.mapfwk.core.SAPScalarInputValueType;
import com.informatica.powercenter.sdk.mapfwk.core.SAPStructure;
import com.informatica.powercenter.sdk.mapfwk.core.SAPStructureField;
/**
* This is a sample program that demonstrates how to create a mapping
* with an SAP source and an SAP application source qualifier.
*
 * 1. Create a mapping with the following structure:
* SAP source --> ASQ --> Expr Transform --> Flatfile Target
*
* 2. Create a session for the mapping.
*
* 3. Create workflow using the session created in step 2.
*
*/
public class SAPMappingExample extends Base {
/*
* Create an SAP source
*/
protected void createSources() {
ekkoSAPSrc = this.createSAPekkoSource("sophie");
folder.addSource(ekkoSAPSrc);
}
/*
* Create a Flatfile target
*/
protected void createTargets() {
fileTgt = this.createFlatFileTarget("Output1");
}
/*
* Create a mapping with the following structure:
* SAP Source --> ASQ --> Flatfile Target
*/
protected void createMappings() throws Exception {
// create DSQ
dsq = (SAPASQTransformation) ekkoSAPSrc.createASQTransform();
mapping.addTransformation(dsq);
//Create Structures
/**
* SAPFunction with ScalarInput and Table
*/
SAPFunction func2 = new SAPFunction("Z_PM_RFC_RUN_PROGRAM",
"An RFC-callable function to run programs for staging data in files.");
dsq.addSAPFunction(func2);
/**
* SAPFunction with Table and Changing
*/
SAPFunction func4 = new SAPFunction("ZTEMP_TABPARAM", "TABPARAM");
/**
* SAP Function containing scalar input and scalar output.
*/
SAPFunction func5 = new SAPFunction("ZCHAR_UNISCALAR", "RFC for unicode
testing");
SAPStructureField func5fld1 = new SAPStructureField("ZCHAR_UNI-FKEY",
NativeDataTypes.SAP.CHAR, "FKEY_OUT", "10", "0", false);
func5.addSAPFunctionScalarOutput(func5fld1);
SAPStructureField func5fld2 = new SAPStructureField("ZZCHAR_UNI-FCHAR",
NativeDataTypes.SAP.CHAR, "FCHAR_OUT", "255", "0", false);
func5.addSAPFunctionScalarOutput(func5fld2);
SAPStructureField func5fld3 = new SAPStructureField("ZCHAR_UNI-FKEY",
NativeDataTypes.SAP.CHAR, "FKEYTYPE_OUT", "10", "0", false);
func5.addSAPFunctionScalarOutput(func5fld3);
SAPStructureField func5fld4 = new SAPStructureField("ZCHAR_UNI-FCHAR",
NativeDataTypes.SAP.CHAR, "FCHARTYPE_OUT", "255", "0", false);
func5.addSAPFunctionScalarOutput(func5fld4);
dsq.addSAPFunction(func5);
// write to target
mapping.writeTarget(expRS, this.fileTgt);
folder.addMapping(mapping);
}
/*
* Create a session
*/
protected void createSession() throws Exception {
session = new Session("SampleSAPSession", "SampleSAPSession",
"SAP session with sap source");
session.setMapping(mapping);
connInfo.getConnProps().setProperty( ConnectionPropsConstants.SESSION_EXTENSION_NAME,
StringConstants.SAP_STAGING_READER );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.STAGE_FILE_DIRECTORY,
"STAGE_FILE_DIRECTORY" );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.SOURCE_FILE_DIRECTORY,
"SOURCE_FILE_DIRECTORY" );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.STAGE_FILE_NAME,
"asq_ekko" );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.REINITIALIZE_THE_STAGE_FILE,
"YES" );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.PERSIST_THE_STAGE_FILE,
"YES" );
connInfo.getConnProps().setProperty( ConnectionPropsConstants.RUN_SESSION_IN_BACKGROUND,
"YES" );
/*
* Create workflow using SAP session
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow("WF_Sample_SAP_Workflow",
"WF_Sample_SAP_Workflow", "Workflow for sap session");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
/*
* Sample program
*/
public static void main(String[] args) {
try {
SAPMappingExample sapMapping = new SAPMappingExample();
if (args.length > 0) {
if (sapMapping.validateRunMode(args[0])) {
sapMapping.execute();
}
} else {
sapMapping.printUsage();
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Exception is: " + e.getMessage());
}
}
}
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.ABAPProgram;
import com.informatica.powercenter.sdk.mapfwk.core.DSQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.MapFwkOutputContext;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.SAPASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.SAPFunction;
import com.informatica.powercenter.sdk.mapfwk.core.SAPProgramFlowCodeBlock;
import com.informatica.powercenter.sdk.mapfwk.core.SAPProgramFlowSource;
import com.informatica.powercenter.sdk.mapfwk.core.SAPScalarInputValueType;
import com.informatica.powercenter.sdk.mapfwk.core.SAPStructure;
import com.informatica.powercenter.sdk.mapfwk.core.SAPStructureField;
import com.informatica.powercenter.sdk.mapfwk.core.SAPStructureInstance;
import com.informatica.powercenter.sdk.mapfwk.core.SAPVariable;
import com.informatica.powercenter.sdk.mapfwk.core.SAPVariableCategory;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.StringConstants;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TransformField;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
/**
* This is a sample program that demonstrates how to create a mapping
* with an SAP source and SAP application source qualifier.
*
* This sample program shows how to generate and install an ABAP program.
* The SAP functions and variables that you create for the ABAP program must
* match the functions and variables defined in the SAP server.
* If the names and values for the SAP functions do not match
* the names and values in the SAP server, the generated ABAP program
* will have incorrect values and will not extract data from the SAP server
* correctly. This will cause the workflow to fail.
*
 * 1. Create a mapping with the following structure:
* SAP source --> ASQ --> Expr Transform --> Flatfile Target
*
* 2. Create SAP functions and structures.
*
* 3. Create a session for the mapping.
*
 * 4. Create workflow using the session created in step 3.
*
*/
public class SAPMappingWithFunctions extends Base {
/*
* Create a SAP source
*/
protected void createSources() {
sapSrcA004 = this.createSAPekkoSource("in23sapec6");
folder.addSource(sapSrcA004);
}
/*
* Create a Flatfile target
*/
protected void createTargets() {
fileTgt = this.createFlatFileTarget("Output1");
}
/*
* Create a mapping with the following structure
* SAP Source --> ASQ --> Flatfile
* Target
*/
protected void createMappings() throws Exception {
// create DSQ
dsq = (SAPASQTransformation) sapSrcA004.createASQTransform();
mapping.addTransformation(dsq);
// Create Structures
// createSAPStructures();
function.addSAPFunctionScalarInput(struc1);
function.addSAPFunctionScalarInput(struc2);
dsq.addSAPFunction(function);
// write to target
mapping.writeTarget(expRS, this.fileTgt);
folder.addMapping(mapping);
}
/*
* Create a session
*/
protected void createSession() throws Exception {
session = new Session("SampleSAPSession12", "SampleSAPSession12", "SAP session
with sap source");
session.setMapping(mapping);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.SESSION_EXTENSION_NAME,
StringConstants.SAP_STAGING_READER);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.STAGE_FILE_DIRECTORY,
"STAGE_FILE_DIRECTORY");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.SOURCE_FILE_DIRECTORY,
"SOURCE_FILE_DIRECTORY");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.STAGE_FILE_NAME,
"asq_ekko");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.REINITIALIZE_THE_STAGE_FILE,
"YES");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.RUN_SESSION_IN_BACKGROUND,
"YES");
session.addConnectionInfoObject(dsq, connInfo);
session.addConnectionInfoObject(dsq, ftpConnInfo);
}
/*
* Create workflow using SAP session
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow("WF_Sample_SAP_Workflow2", "WF_Sample_SAP_Workflow2",
"Workflow for sap session2");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
/**
* This method generates the output xml
*
* @throws Exception
* exception
*/
public void generateOutput() throws Exception {
MapFwkOutputContext outputContext = new
MapFwkOutputContext(MapFwkOutputContext.OUTPUT_FORMAT_XML,
MapFwkOutputContext.OUTPUT_TARGET_FILE, mapFileName);
try {
intializeLocalProps();
} catch (IOException ioExcp) {
System.err.println(ioExcp.getMessage());
System.err.println("Error reading pcconfig.properties file.");
System.err.println("pcconfig.properties file is present in \\javamappingsdk\
\samples directory");
System.exit(0);
}
/*
* Sample program
*/
public static void main(String[] args) {
try {
SAPMappingWithFunctions sapMapping = new SAPMappingWithFunctions();
if (args.length > 0) {
if (sapMapping.validateRunMode(args[0])) {
sapMapping.execute();
}
} else {
sapMapping.printUsage();
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Exception is: " + e.getMessage());
}
}
}
The following sample code shows how to use one source qualifier to read two related sources:
// Logic to create a DSQ Transformation using 2 sources. They
// should satisfy PKFK constraint.
List<InputSet> inputSets = new ArrayList<InputSet>();
InputSet itemIS = new InputSet( itemSource );
InputSet productIS = new InputSet( productSource );
inputSets.add( itemIS );
inputSets.add( productIS );
TransformationContext tc = new TransformationContext( inputSets );
Transformation dsqTransform = tc.createTransform( TransformationConstants.DSQ,
"CMN_DSQ" );
// RowSet of combined transformation
RowSet dsqRS = (RowSet) dsqTransform.apply().getRowSets().get( 0 );
mapping.addTransformation( dsqTransform );
// Create an Expression Transformation
TransformHelper helper = new TransformHelper( mapping );
TransformField orderCost = new TransformField(
"decimal(24,0) OrderCost = (Price*Wholesale_cost)" );
RowSet expRowSet = (RowSet) helper.expression( dsqRS, orderCost,
"comb_exp_transform" ).getRowSets().get( 0 );
// write to target
mapping.writeTarget( expRowSet, outputTarget );
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.SAPASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.SAPProgramFlowCodeBlock;
import com.informatica.powercenter.sdk.mapfwk.core.SAPProgramFlowSource;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationConstants;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationContext;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
/**
* This is a sample program that demonstrates how to create a mapping
* with two SAP sources and a common SAP application source qualifier.
*
 * 1. Create a mapping with the following structure:
* (SAP source1 , SAP source2) --> ASQ --> Flatfile Target
*
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources() {
ekpoSource = this.createSAPekpoSource("in23sapec6");
folder.addSource(ekpoSource);
ekkoSource = this.createSAPekkoSource("in23sapec6");
folder.addSource(ekkoSource);
}
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets() {
outputTarget = this.createFlatFileTarget("CMN_DSQ_Output");
}
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
protected void createMappings() throws Exception {
mapping = new Mapping("CMN_SAP_ASQ", "CMN_SAP_ASQ", "This is CMN_ASQ sample");
setMapFileName(mapping);
// Logic to create an ASQ Transformation using 2 sources. They
// should satisfy PKFK constraint or a join override.
List<InputSet> inputSets = new ArrayList<InputSet>();
InputSet ekkoIS = new InputSet(ekkoSource);
Field ebelnFld = ekkoSource.getField("BUKRS");
ebelnFld.setFieldKeyType(FieldKeyType.FOREIGN_KEY);
ebelnFld.setReferenceContraint(ekpoSource.getName(), ekpoSource.getDBName(),
"BUKRS");
InputSet ekpoIS = new InputSet(ekpoSource);
inputSets.add(ekkoIS);
inputSets.add(ekpoIS);
TransformationContext tc = new TransformationContext(inputSets);
asqTransform = (SAPASQTransformation)
tc.createTransform(TransformationConstants.ASQ_SAP, "ASQ_CMN_SAP");
// ASQ will create the default program flow objects for sources. To
// apply any program flow object properties,
// fetch the objects and apply. Sample shown below
SAPProgramFlowSource pfSrc = (SAPProgramFlowSource)
asqTransform.getProgramFlowObject("EKKO");
pfSrc.setDynamicFilter("EKKO.BUKRS = \"XYZ\"");
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_CMN_ASQ", "Workflow_for_CMN_DSQ", "This
workflow for joiner");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
/*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSession()
*/
protected void createSession() throws Exception {
session = new Session("Session_For_CMN_ASQ", "Session_For_CMN_ASQ", "This is
session for CMN_ASQ");
session.setMapping(this.mapping);
}
}
Transformations
The following code examples show how to create different types of transformations.
Aggregator
// create an aggregator Transformation
// calculate cost per order using the formula
// SUM((UnitPrice * Quantity) * (100 - Discount) / 100) grouped by
// OrderId
TransformField cost = new TransformField(
"decimal(15,0) total_cost = (SUM((UnitPrice * Quantity) * (100 - Discount) / 100))");
RowSet aggRS = (RowSet) helper.aggregate(dsqRS, cost,
new String[] { "OrderID" }, "agg_transform").getRowSets().get(0);
// write to target
mapping.writeTarget(aggRS, this.outputTarget);
Data Masking
//creating a mapping with Data Masking transformation.
protected void createMappings() throws Exception {
mapping = new Mapping("DMO003", "DataMaskTransformationTest", "Mapping for
DataMaskTransformationTest");
setMapFileName(mapping);
dmo.setMetaExtensionValue(StringConstants.META_EXTENTION_MASKING_RULES,
dmo.getPortinfo().getXml());
}
/*
* From dictionary with flat file port dictionary and flat file domain dictionary
*/
// dmo.addEMailMaskFromDictionary("JOB_TITLE", "", "1", EmailDictionary.DomainType.RANDOM,
// "FNAME", "10", "TRUE", "FALSE", "FNAME",
// "10", flatfilePortDictionary, flatfileDomainDictionary);
/*
* From mapping with relational domain dictionary
*/
// dmo.addEmailMaskFromMapping("JOB_TITLE", "", "1", EmailDictionary.DomainType.RANDOM,
// "out_JOB_ID", "10", "TRUE", "FALSE", "out_JOB_ID", "10",
// relationalDomainDictionary);
/*
* From mapping with constant domain
*/
// dmo.addEmailMaskFromMapping("JOB_TITLE", "gmail.com", "1",
// EmailDictionary.DomainType.CONSTANT, "out_JOB_ID", "10",
// "TRUE", "FALSE", "out_JOB_ID", "10", null);
/*
* Standard email masking
*/
dmo.addStandardEmailMask("JOB_TITLE", "TRUE", "TRUE");
dmo.setMetaExtensionValue(StringConstants.META_EXTENTION_MASKING_RULES,
dmo.getPortinfo().getXml());
}
Expression
// create an expression Transformation
// the fields LastName and FirstName are concatenated to produce a new
// field fullName
String expr = "string(80, 0) fullName= firstName || lastName";
TransformField outField = new TransformField( expr );
String expr2 = "integer(10,0) YEAR_out=IIF(EmployeeID=0,2000,2001)";
TransformField outField2 = new TransformField( expr2 );
List<TransformField> transFields = new ArrayList<TransformField>();
transFields.add( outField );
transFields.add( outField2 );
RowSet expRS = (RowSet) helper.expression( dsqRS, transFields,
"exp_transform" ).getRowSets()
.get( 0 );
// write to target
mapping.writeTarget( expRS, outputTarget );
Router
TransformHelper helper = new TransformHelper( mapping );
// Create a TransformGroup
List<TransformGroup> transformGrps = new ArrayList<TransformGroup>();
TransformGroup transGrp = new TransformGroup( "LONDON_GROUP", "City = 'London'" );
transformGrps.add( transGrp );
transGrp = new TransformGroup( "SEATTLE_GROUP", "City = 'Seattle'" );
transformGrps.add( transGrp );
// creating DSQ Transformation
OutputSet itemOSet = helper.sourceQualifier( employeeSrc );
RowSet employeeRowSet = (RowSet) itemOSet.getRowSets().get( 0 );
// create a Router Transformation
OutputSet routerOutputSet = helper.router( employeeRowSet, transformGrps,
"Router_transform" );
// write to target
RowSet outRS = routerOutputSet.getRowSet( "LONDON_GROUP" );
if (outRS != null)
mapping.writeTarget( outRS, londonOutputTarget );
outRS = routerOutputSet.getRowSet( "SEATTLE_GROUP" );
if (outRS != null)
mapping.writeTarget( outRS, seattleOutputTarget );
outRS = routerOutputSet.getRowSet( "DEFAULT" );
if (outRS != null)
mapping.writeTarget( outRS, defaultOutputTarget );
Sequence Generator
// create a Sequence Generator Transformation
RowSet seqGenRS = (RowSet) helper.sequenceGenerator( "sequencegenerator_transform" )
.getRowSets().get( 0 );
List<InputSet> vinSets = new ArrayList<InputSet>();
vinSets.add( new InputSet( dsqRS ) );
vinSets.add( new InputSet( seqGenRS ) );
// write to target
mapping.writeTarget( vinSets, outputTarget );
Sorter
// create a sorter Transformation
RowSet sorterRS = (RowSet) helper.sorter( dsqRS, new String[] { "FirstName",
"LastName" },
new boolean[] { true, false }, "sorter_transform" ).getRowSets().get( 0 );
SQL Transformation
List<TransformField> transformFields = new ArrayList<TransformField>();
Field field1 = new
Field("NewField1","NewField1","",TransformationDataTypes.STRING,"20","0",
FieldKeyType.NOT_A_KEY,FieldType.TRANSFORM,false);
field1.setAttributeValues(SQLTransformation.ATTR_SQLT_PORT_ATTRIBUTE, new
Integer(2));
transformFields.add(tField1);
transformFields.add(tField2);
transformFields.add(tField3);
mapping.writeTarget(sqlRS, this.outputTarget);
Stored Procedure
// create a stored procedure transformation
List<TransformField> transformFields = new ArrayList<TransformField>();
Field field1 = new Field( "RetValue", "RetValue", "This is return value",
TransformationDataTypes.INTEGER, "10", "0", FieldKeyType.NOT_A_KEY,
FieldType.TRANSFORM, false );
TransformField tField1 = new TransformField( field1,PortType.RETURN_OUTPUT );
transformFields.add( tField1 );
Field field2 = new Field( "nID1", "nID1", "This is the ID field",
TransformationDataTypes.INTEGER, "10", "0", FieldKeyType.NOT_A_KEY,
FieldType.TRANSFORM, false );
TransformField tField2 = new TransformField( field2, PortType.INPUT );
// transformFields.add( tField2 );
Field field3 = new Field( "outVar", "outVar", "This is the Output field",
TransformationDataTypes.STRING, "20", "0", FieldKeyType.NOT_A_KEY,
FieldType.TRANSFORM, false );
TransformField tField3 = new TransformField( field3, PortType.INPUT_OUTPUT );
transformFields.add( tField3 );
java.util.Hashtable<Field,Object> link = new java.util.Hashtable<Field,Object>();
link.put( dsqRS.getField( "ItemId" ), field2 );
PortLinkContext linkContext =
PortLinkContextFactory.getPortLinkContextByMap( link );
RowSet storedProcRS = (RowSet) helper.storedProc( new InputSet( dsqRS, linkContext ),
transformFields, "SampleStoredProc", "Sample Stored Procedure Transformation" )
.getRowSets().get( 0 );
// write to target
mapping.writeTarget( storedProcRS, this.outputTarget );
Transaction Control
// create a Transaction Control Transformation
String condition = "IIF(EmployeeID>10,TC_COMMIT_AFTER,TC_CONTINUE_TRANSACTION)";
Unconnected Lookup
// create an expression that calls the unconnected lookup :LKP.lkp and
// flags each row based on the lookup result
String expr = "string(80, 0) firstName1= IIF(ISNULL(:LKP.lkp(ItemId, Item_Name)), "
+ "DD_UPDATE, DD_REJECT)";
TransformField outField = new TransformField( expr );
RowSet expRS = (RowSet) helper.expression( dsqRS, outField,
"exp_transform" ).getRowSets()
.get( 0 );
// create an unconnected lookup transformation
// set the return port
Field retField = new Field( "Item_No1", "Item_No", "",
TransformationDataTypes.INTEGER, "10",
"0", FieldKeyType.PRIMARY_KEY, FieldType.TRANSFORM, false );
// create input and output fields
List<Field> input = new ArrayList<Field>();
List<Field> output = new ArrayList<Field>();
createUncLkpFields( input, output );
// create an unconnected lookup
String condition = "ItemId = EmployeeID";
UnconnectedLookup uncLkp = helper.unconnectedLookup( "lkp", retField, input,
condition,
employeeSrc );
uncLkp.setPortType( "EmployeeID", PortType.LOOKUP );
mapping.addTransformation(uncLkp);
// write to target
mapping.writeTarget( expRS, outputTarget );
Union
RowSet rsGroupFld = new RowSet();
Field field1 = new Field( "ItemId", "ItemId", "",
TransformationDataTypes.INTEGER, "10", "0",
FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
rsGroupFld.addField( field1 );
Field field2 = new Field( "Item_Name", "Item_Name", "",
TransformationDataTypes.STRING, "72",
"0", FieldKeyType.NOT_A_KEY,FieldType.TRANSFORM, false );
rsGroupFld.addField( field2 );
Field field3 = new Field( "Item_Price", "Item_Price", "",
TransformationDataTypes.DECIMAL, "10",
"2", FieldKeyType.NOT_A_KEY, FieldType.TRANSFORM, false );
rsGroupFld.addField( field3 );
// creating DSQ Transformation
OutputSet itemOSet = helper.sourceQualifier( itemsSrc );
RowSet itemRowSet = (RowSet) itemOSet.getRowSets().get( 0 );
// itemRowSet.setGroupName("ITEM_GROUP");
OutputSet productOSet = helper.sourceQualifier( productSrc );
RowSet productRowSet = (RowSet) productOSet.getRowSets().get( 0 );
// productRowSet.setGroupName("PRODUCT_GROUP");
// Port propagation for Items and products
PortPropagationContext itemRSContext = PortPropagationContextFactory
.getContextForIncludeCols( new String[] { "ItemId", "Item_Name",
"Price" } );
PortPropagationContext productRSContext = PortPropagationContextFactory
.getContextForIncludeCols( new String[] { "Item_No", "Item_Name",
"Cust_Price" } );
List<InputSet> inputSets = new ArrayList<InputSet>();
inputSets.add( new InputSet( itemRowSet, itemRSContext ) );
inputSets.add( new InputSet( productRowSet, productRSContext ) );
// create a Union Transformation
Update Strategy
// create an update strategy transformation
// Insert only if the city is 'Seattle' else reject it
RowSet filterRS = (RowSet) helper.updateStrategy( dsqRS,
"IIF(City = 'Seattle', DD_INSERT, DD_REJECT )", "updateStrategy_transform" )
.getRowSets().get( 0 );
XML Generator
/*
* Copyright Informatica Corporation.
*/
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
/**
* Uses the XML Generator transformation in a mapping.
*
*/
public class XMLParserGeneratorSample extends Base {
// ////////////////////////////////////////////////////////////////
// instance variables
// ////////////////////////////////////////////////////////////////
protected Mapping mapping = null;
protected Source jobSourceObj = null;
protected Target targetObj = null;
/*
* Creates a source.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources() {
jobSourceObj = createOracleJobSource("Oracle_Source");
folder.addSource(jobSourceObj);
}
/*
* Creates a target.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets() {
/*
* Creates a mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
protected void createMappings() throws Exception {
// create a mapping object
mapping = new Mapping("XMLParserGeneratorTransformation",
"XMLParserGeneratorTransformation", "Mapping for XMLParserGeneratorTransformation");
setMapFileName(mapping);
TransformHelper helper = new TransformHelper(mapping);
// create DSQ
RowSet dsqRowSet = (RowSet) helper.sourceQualifier(this.jobSourceObj).getRowSets().get(0);
// only one field is given as input
mapping.writeTarget(outputGenTarget, this.targetObj);
folder.addMapping(mapping);
}
/*
* Creates a session to run the mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSession()
*/
protected void createSession() throws Exception {
session = new Session("Session_For_CustomACTIVE", "Session_For_CustomACTIVE",
"This is session for Custom Transformation");
session.setMapping(mapping);
}
/*
* Creates a workflow.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_CustomTransformation",
"Workflow_for_CustomTransformation", "This workflow for Custom Transformation");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
/**
* @param args
*/
public static void main(String[] args) {
try {
XMLParserGeneratorSample customActive = new XMLParserGeneratorSample();
if (args.length > 0) {
if (customActive.validateRunMode(args[0])) {
customActive.execute();
}
} else {
customActive.printUsage();
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Exception is: " + e.getMessage());
}
}
}
XML Parser
/*
* Copyright Informatica Corporation.
*/
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
/**
* Uses the XML Parser transformation in a mapping.
*
*/
public class XMLParserTransformationSample extends Base {
// ////////////////////////////////////////////////////////////////
// instance variables
// ////////////////////////////////////////////////////////////////
protected Mapping mapping = null;
protected Source jobSourceObj = null;
protected Target targetObj = null;
/*
* Creates a source.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources() {
jobSourceObj = createOracleJobSource("Oracle_Source");
folder.addSource(jobSourceObj);
}
/*
* Creates a mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
protected void createMappings() throws Exception {
// create a mapping object
mapping = new Mapping("XMLParserTransformation", "XMLParserTransformation",
"Mapping for FDA on JOBS table");
setMapFileName(mapping);
TransformHelper helper = new TransformHelper(mapping);
// create DSQ
RowSet dsqRowSet = (RowSet) helper.sourceQualifier(this.jobSourceObj).getRowSets().get(0);
// only one field is given as input
mapping.writeTarget(outputTarget, this.targetObj);
folder.addMapping(mapping);
}
/*
* Creates a session to run the mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSession()
*/
protected void createSession() throws Exception {
session = new Session("Session_For_CustomACTIVE", "Session_For_CustomACTIVE",
"This is session for Custom Transformation");
session.setMapping(mapping);
}
/*
* Creates a workflow.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_CustomTransformation",
"Workflow_for_CustomTransformation", "This workflow for Custom Transformation");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
/**
* @param args
*/
public static void main(String[] args) {
try {
XMLParserTransformationSample customActive = new
XMLParserTransformationSample();
if (args.length > 0) {
if (customActive.validateRunMode(args[0])) {
customActive.execute();
}
} else {
customActive.printUsage();
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Exception is: " + e.getMessage());
}
}
}
@Override
protected void createSources() {
entityViewSrc = new EntityView("C:\\Venkat\\JMF\\JMF\\eBiz\\main\\javamappingsdk\\xsd_samples\\FIN_STRUCT-ZTT_ALL_STRUCT.xsd",
GroupType.OUTPUT, FieldType.SOURCE);
xmlSrc = new XMLSource(entityViewSrc,"XMLSourceEntity", "XMLSourceEntity",
"XMLSourceEntity", "XMLSourceEntity", new
ConnectionInfo(SourceTargetType.XML));
xmlSrc.createSourceFields();
folder.addSource(xmlSrc);
}
@Override
protected void createTargets() {
entityViewTgt = new EntityView("C:\\Venkat\\JMF\\JMF\\eBiz\\main\\javamappingsdk\\xsd_samples\\FIN_STRUCT-ZTT_ALL_STRUCT.xsd",
GroupType.INPUT, FieldType.TARGET);
xmlTgt = new XMLTarget(entityViewTgt, "XMLTargetEntity", "XMLTargetEntity",
"XMLTargetEntity", "XMLTargetEntity", new
ConnectionInfo(SourceTargetType.XML));
xmlTgt.createTargetFields();
folder.addTarget(xmlTgt);
}
@Override
protected void createMappings() throws Exception {
// create a mapping object
mapping = new Mapping("XMLSourceTargetEntityViewMapping",
"XMLSourceTargetEntityViewMapping",
"XML Source Target Entity Mapping");
setMapFileName(mapping);
TransformHelper helper = new TransformHelper(mapping);
// create DSQ
RowSet dsqRowSet = (RowSet) helper.sourceQualifier(xmlSrc).getRowSets().get(0);
mapping.writeTarget(dsqRowSet, this.xmlTgt);
folder.addMapping(mapping);
}
@Override
protected void createSession() throws Exception {
session = new Session("Session_For_XMLSourceTargetEntityView",
"Session_For_XMLSourceTargetEntityView", This is session for
XML Source / Target Entity");
session.setMapping(mapping);
}
@Override
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_XMLSourceTargetEntityView",
"Workflow_for_XMLSourceTargetEntityView", "This workflow for
XML Source / Target Entity");
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.GroupType;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.PortType;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.SourceGroup;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TargetGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Transformation;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
import com.informatica.powercenter.sdk.mapfwk.core.xml.NormalizedView;
import com.informatica.powercenter.sdk.mapfwk.core.xml.XMLSource;
import com.informatica.powercenter.sdk.mapfwk.core.xml.XMLTarget;
import com.informatica.powercenter.sdk.mapfwk.core.xml.XMLView;
@Override
protected void createSources() {
normalizedViewSrc = new NormalizedView("C:\\Users\\vthenapp\\Desktop\\Tasks\\XML Sources\\uw_original.xsd",
GroupType.OUTPUT, FieldType.SOURCE);
xmlSrc = new XMLSource(normalizedViewSrc,"XMLSource", "XMLSource", "XMLSource",
"XMLSource", new ConnectionInfo(SourceTargetType.XML));
xmlSrc.createSourceFields();
folder.addSource(xmlSrc);
}
@Override
protected void createMappings() throws Exception {
// create a mapping object
mapping = new Mapping("XMLSourceTargetMapping", "XMLSourceTargetMapping",
"XML Source Target Mapping");
setMapFileName(mapping);
TransformHelper helper = new TransformHelper(mapping);
// create DSQ
RowSet dsqRowSet = (RowSet) helper.sourceQualifier(xmlSrc).getRowSets().get(0);
mapping.writeTarget(dsqRowSet, this.xmlTgt);
folder.addMapping(mapping);
}
@Override
protected void createSession() throws Exception {
session = new Session("Session_For_XMLSourceTarget",
"Session_For_XMLSourceTarget",
"This is session for XML Source / Target");
session.setMapping(mapping);
}
@Override
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_XMLSourceTarget",
"Workflow_for_XMLSourceTarget",
"This workflow for XML Source / Target");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.Folder;
if ( propStream != null ) {
properties.load( propStream );
rep.getRepoConnectionInfo().setPcClientInstallPath(properties.getProperty(RepoPropsConstants.PC_CLIENT_INSTALL_PATH));
rep.getRepoConnectionInfo().setTargetFolderName(properties.getProperty(RepoPropsConstants.TARGET_FOLDER_NAME));
rep.getRepoConnectionInfo().setTargetRepoName(properties.getProperty(RepoPropsConstants.TARGET_REPO_NAME));
rep.getRepoConnectionInfo().setRepoServerHost(properties.getProperty(RepoPropsConstants.REPO_SERVER_HOST));
rep.getRepoConnectionInfo().setAdminUsername(properties.getProperty(RepoPropsConstants.ADMIN_USERNAME));
rep.getRepoConnectionInfo().setRepoServerPort(properties.getProperty(RepoPropsConstants.REPO_SERVER_PORT));
rep.getRepoConnectionInfo().setServerPort(properties.getProperty(RepoPropsConstants.SERVER_PORT));
rep.getRepoConnectionInfo().setDatabaseType(properties.getProperty(RepoPropsConstants.DATABASETYPE));
if (properties.getProperty(RepoPropsConstants.PMREP_CACHE_FOLDER) != null)
rep.getRepoConnectionInfo().setPmrepCacheFolder(properties.getProperty(RepoPropsConstants.PMREP_CACHE_FOLDER));
}
else {
throw new IOException( "pcconfig.properties file not found.");
}
}
@Override
protected void createSources() {
List<Source> listOfSources = null;
try {
for (int i = 0; i < folderSize; i++) {
//fetch XML source
listOfSources = ((Folder) folders.get(i))
.fetchSourcesFromRepository(new INameFilter() {
@Override
public boolean accept(String name) {
return name.equals("XMLSourceEntity");
}
});
}
xmlSrc = (XMLSource) listOfSources.get(0);
xmlSrc.setName("FetchedXMLSource");
xmlSrc.setBusinessName("FetchedXMLSource");
xmlSrc.setInstanceName("FetchedXMLSource");
xmlSrc.setDescription("FetchedXMLSource");
xmlSrc.setModified(true);
folder.addSource(xmlSrc);
} catch (RepoOperationException e) {
e.printStackTrace();
} catch (MapFwkReaderException e) {
e.printStackTrace();
}
}
@Override
protected void createTargets() {
List<Target> listOfTargets = null;
try {
for (int i = 0; i < folderSize; i++) {
//fetch XML target
listOfTargets = ((Folder) folders.get(i))
.fetchTargetsFromRepository(new INameFilter() {
@Override
public boolean accept(String name) {
return name.equals("XMLTargetEntity");
}
});
}
xmlTgt = (XMLTarget) listOfTargets.get(0);
xmlTgt.setName("FetchedXMLTarget");
xmlTgt.setBusinessName("FetchedXMLTarget");
xmlTgt.setInstanceName("FetchedXMLTarget");
@Override
protected void createMappings() throws Exception {
// create a mapping object
mapping = new Mapping("FetchedXMLSourceTargetMapping",
"FetchedXMLSourceTargetMapping", "Fetched XML Source Target Mapping");
setMapFileName(mapping);
// create DSQ
RowSet dsqRowSet = (RowSet) helper.sourceQualifier(xmlSrc).getRowSets().get(0);
mapping.writeTarget(dsqRowSet, this.xmlTgt);
folder.addMapping(mapping);
@Override
protected void createSession() throws Exception {
session = new Session("Session_For_FetchedXMLSourceTargetMapping",
"Session_For_FetchedXMLSourceTargetMapping",
"This is session for Fetched XML Source / Target");
session.setMapping(mapping);
}
@Override
protected void createWorkflow() throws Exception {
workflow = new Workflow("Workflow_for_FetchedXMLSourceTarget",
"Workflow_for_FetchedXMLSourceTarget",
"This workflow for Fetched XML Source / Target ");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.core.ASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.MultiGroupASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.SourceGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TargetGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Transformation;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationConstants;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationContext;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
import com.informatica.powercenter.sdk.mapfwk.exception.InvalidTransformationException;
import com.informatica.powercenter.sdk.mapfwk.exception.MapFwkException;
import com.informatica.powercenter.sdk.mapfwk.exception.RepoOperationException;
import com.informatica.powercenter.sdk.mapfwk.plugin.MainframeConPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectAccessMethodConstants;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectConInfo;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectMetaExtentionConstants;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSource;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSourceFactory;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectTarget;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectTargetFactory;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerExchangeStringConstants;
import com.informatica.powercenter.sdk.mapfwk.powercentercompatibility.PowerCenterCompatibilityFactory;
/**
*
 * When you run this sample code, set the mapFileName object to an absolute location.
* Otherwise, the example will fail to import an object into the repository.
*
*
*/
public class PWXAdabasSample extends Base{
PowerConnectSource source;
PowerConnectTarget outputTarget;
TransformHelper helper;
SourceGroup srcGrp;
TargetGroup trgGrp;
ASQTransformation ASQ_TRANS;
Transformation filter;
/* Creates a mapping.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
/*
* Creates a filter transformation on a PowerExchange target.
*/
TransformationContext trans = new TransformationContext(inSets);
filter = trans.createTransform(TransformationConstants.FILTER_PROC,
"PWX_FILTER_TRANS");
RowSet filterRS = (RowSet) filter.apply().getRowSets().get(0);
mapping.addTransformation(filter);
mapping.writeTarget(filterRS, outputTarget);
folder.addMapping(mapping);
/*
* Method 2 : This creates a source qualifier without using the TransformationHelper.
* If you use this method to create a source qualifier, enable this code
* and comment out the code in Method 1.
*/
/* ASQ_TRANS =
(MultiGroupASQTransformation)tc.createTransform(TransformationConstants.MULTI_GROUP_ASQ,
"AMGDSQ_r3kperf_R3KROOT");
RowSet asqRS = (RowSet) ASQ_TRANS.apply().getRowSets().get(0);
ASQ_TRANS.setTransformGroups(listTransGroup);
((MultiGroupASQTransformation)ASQ_TRANS).setRefDBDName(source.getConnInfo().getConnProps(
).getProperty(ConnectionPropsConstants.DBNAME));
((MultiGroupASQTransformation)ASQ_TRANS).setRefSourceName(source.getName());
mapping.addTransformation(ASQ_TRANS);
mapping.writeTarget(asqRS, outputTarget);
folder.addMapping(mapping); */
/*
* Sets the session transformation instance properties for PowerExchange sources.
*/
Properties props = new Properties();
props.setProperty(MainframeConPropsConstants.ADABAS.TRACING_LEVEL,
PowerExchangeStringConstants.TERSE);
props.setProperty(MainframeConPropsConstants.ADABAS.OUTPUT_IS_DETERMINISTIC,
PowerExchangeStringConstants.NO);
/*
* If you use method 1 in createMappings(), use the following code.
*/
session.addSessionTransformInstanceProperties(mapping.getTransformations().get(0),
props);
/*
* If you use method 2 in createMappings(), enable the following code to set
session
* attributes and comment the previous code.
*/
//session.addSessionTransformInstanceProperties(ASQ_TRANS, props);
session.addSessionTransformInstanceProperties(filter, null);
session.setMapping(this.mapping);
}
/* Creates a source.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources()
{
PowerConnectSourceFactory sourceFactory =
PowerConnectSourceFactory.getInstance();
try
{
source =
sourceFactory.getPowerConnectSourceInstance(PowerConnectSourceTargetType.ADABAS,
"PWX_SRC_ADABAS", "mySourceDBD", "mySource", "r3kperf_R3KROOT");
srcGrp = new SourceGroup("r3kperf_R3KROOT",(String)null);
source.createField("CCK_R3KROOT_COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMA
RY_KEY, FieldType.SOURCE, true);
source.createField("COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY,
FieldType.SOURCE, true);
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.ADABAS);
//source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX,
"perform");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange source.
*/
Properties connAttributes = new Properties();
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "pass");
connInfo.setCustSessionExtAttr(connAttributes);
}
/* (non-Javadoc)
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets()
{
PowerConnectTargetFactory targetFactory =
PowerConnectTargetFactory.getInstance();
try
{
outputTarget =
targetFactory.getPowerConnectTargetInstance(PowerConnectSourceTargetType.ADABAS,
"PWX_TGT_ADABAS", "myTargetDBD", "myTargetDBD", "r3kperf_R3KROOT");
trgGrp = new TargetGroup("perform",(String)null);
outputTarget.createField("CCK_R3KROOT_COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType
.PRIMARY_KEY, FieldType.TARGET, true);
outputTarget.createField("COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY
, FieldType.TARGET, true);
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.ADABAS);
//outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX
, "perform");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
List<ConnectionInfo> connInfos = outputTarget.getConnInfos();
for (int i=0;i<connInfos.size();i++)
{
PowerConnectConInfo connInfo = (PowerConnectConInfo) connInfos.get(i);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange target.
*/
Properties connAttributes = new Properties();
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "dfdsfsdf");
connInfo.setCustSessionExtAttr(connAttributes);
}
} catch (RepoOperationException e){
e.printStackTrace();
} catch (MapFwkException e){
e.printStackTrace();
}
folder.addTarget(outputTarget);
}
/* Creates a workflow.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception
{
workflow = new Workflow("Workflow_For_PWX_ADABAS",
"Workflow_For_PWX_ADABAS", "This workflow for PowerExcange Adabas");
/*
* Sets the repository and domain information for the Integration Service.
*/
workflow.assignIntegrationService("repo_IS", "Domain_IN173082");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
Datacom
The following sample code shows how to connect to a Datacom data source:
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.core.ASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
/**
*
* When you run this sample code, set the mapFileName object to an absolute location.
* Otherwise, the example will fail to import an object into the repository.
*
*
*/
public class PWXDatacomSample extends Base{
PowerConnectSource source;
PowerConnectTarget outputTarget;
TransformHelper helper;
SourceGroup srcGrp;
TargetGroup trgGrp;
ASQTransformation ASQ_TRANS;
Transformation filter;
/* Creates mappings.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
/*
* Creates a filter transformation on a PowerExchange target.
*/
TransformationContext trans = new TransformationContext(inSets);
folder.addMapping(mapping);
/*
* Method 2 : This creates a source qualifier without using the TransformationHelper.
* If you use this method to create a source qualifier, enable this code
* and comment out the code in Method 1.
ASQ_TRANS =
(MultiGroupASQTransformation)tc.createTransform(TransformationConstants.MULTI_GROUP_ASQ,
"AMGDSQ_r3kperf_R3KROOT");
RowSet asqRS = (RowSet) ASQ_TRANS.apply().getRowSets().get(0);
ASQ_TRANS.setTransformGroups(listTransGroup);
((MultiGroupASQTransformation)ASQ_TRANS).setRefDBDName(source.getConnInfo().getConnProps().getProperty(ConnectionPropsConstants.DBNAME));
((MultiGroupASQTransformation)ASQ_TRANS).setRefSourceName(source.getName());
mapping.addTransformation(ASQ_TRANS);
mapping.writeTarget(asqRS, outputTarget);
folder.addMapping(mapping); */
/*
* Sets the session transformation instance properties for PowerExchange sources.
*/
Properties props = new Properties();
props.setProperty(MainframeConPropsConstants.IDMS.TRACING_LEVEL,
PowerExchangeStringConstants.TERSE);
props.setProperty(MainframeConPropsConstants.IDMS.OUTPUT_IS_DETERMINISTIC,
PowerExchangeStringConstants.NO);
session.addSessionTransformInstanceProperties(mapping.getTransformations().get(0),
props);
/*
* If you use method 2 in createMappings(), enable the following code to set
* session attributes and comment out the previous code.
*/
//session.addSessionTransformInstanceProperties(ASQ_TRANS, props);
session.addSessionTransformInstanceProperties(filter, null);
session.setMapping(this.mapping);
}
/* Creates a source.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources()
{
PowerConnectSourceFactory sourceFactory =
PowerConnectSourceFactory.getInstance();
try
{
source =
sourceFactory.getPowerConnectSourceInstance(PowerConnectSourceTargetType.DATACOM,
"PWX_SRC_DATACOM", "DATACOM_mySourceDBD", "DATACOM_mySource",
"DATACOM_r3kperf_R3KROOT_DC");
srcGrp = new SourceGroup("r3kperf_R3KROOT",(String)null);
source.createField("CCK_R3KROOT_COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMA
RY_KEY, FieldType.SOURCE, true);
source.createField("COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY,
FieldType.SOURCE, true);
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.DATACOM);
//source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX,
"perform");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange source.
*/
Properties connAttributes = new Properties();
//
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "pass");
connInfo.setCustSessionExtAttr(connAttributes);
}
/* Creates a target.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets()
{
PowerConnectTargetFactory targetFactory =
PowerConnectTargetFactory.getInstance();
try
{
outputTarget =
targetFactory.getPowerConnectTargetInstance(PowerConnectSourceTargetType.ADABAS,
"PWX_TGT_ADABAS", "myTargetDBD", "myTargetDBD", "r3kperf_R3KROOT");
trgGrp = new TargetGroup("perform",(String)null);
outputTarget.createField("CCK_R3KROOT_COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType
.PRIMARY_KEY, FieldType.TARGET, true);
outputTarget.createField("COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY
, FieldType.TARGET, true);
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.ADABAS);
//
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX, "perform");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
List<ConnectionInfo> connInfos = outputTarget.getConnInfos();
for (int i=0;i<connInfos.size();i++)
{
PowerConnectConInfo connInfo = (PowerConnectConInfo) connInfos.get(i);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange target.
*/
Properties connAttributes = new Properties();
//
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "dfdsfsdf");
connInfo.setCustSessionExtAttr(connAttributes);
}
} catch (RepoOperationException e){
e.printStackTrace();
} catch (MapFwkException e){
e.printStackTrace();
}
folder.addTarget(outputTarget);
}
/*
* Sets the repository and domain information for the Integration Service.
*/
workflow.assignIntegrationService("repo_IS", "Domain_IN173082");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
IDMS
The following sample code shows how to connect to an IDMS data source:
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.core.ASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.MultiGroupASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.OutputSet;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.SourceGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TargetGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Transformation;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationConstants;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationContext;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
/**
*
* When you run this sample code, set the mapFileName object to an absolute location.
* Otherwise, the example will fail to import an object into the repository.
*
*/
public class PWXIDMSSample extends Base{
PowerConnectSource source;
PowerConnectTarget outputTarget;
TransformHelper helper;
SourceGroup srcGrp;
TargetGroup trgGrp;
ASQTransformation ASQ_TRANS;
Transformation filter;
/* Creates a mapping.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
/*
* Creates a filter transformation on a PowerExchange target.
*/
TransformationContext trans = new TransformationContext(inSets);
filter = trans.createTransform(TransformationConstants.FILTER_PROC,
"PWX_FILTER_TRANS");
RowSet filterRS = (RowSet) filter.apply().getRowSets().get(0);
mapping.addTransformation(filter);
mapping.writeTarget(filterRS, outputTarget);
folder.addMapping(mapping);
/*
* Method 2 : This creates a source qualifier without using the TransformationHelper.
* If you use this method to create a source qualifier, enable this code
* and comment out the code in Method 1.
ASQ_TRANS =
(MultiGroupASQTransformation)tc.createTransform(TransformationConstants.MULTI_GROUP_ASQ,
"AMGDSQ_r3kperf_R3KROOT");
RowSet asqRS = (RowSet) ASQ_TRANS.apply().getRowSets().get(0);
ASQ_TRANS.setTransformGroups(listTransGroup);
((MultiGroupASQTransformation)ASQ_TRANS).setRefDBDName(source.getConnInfo().getConnProps().getProperty(ConnectionPropsConstants.DBNAME));
((MultiGroupASQTransformation)ASQ_TRANS).setRefSourceName(source.getName());
mapping.addTransformation(ASQ_TRANS);
mapping.writeTarget(asqRS, outputTarget);
folder.addMapping(mapping); */
/*
* Sets the session transformation instance properties for PowerExchange sources.
*/
Properties props = new Properties();
props.setProperty(MainframeConPropsConstants.IDMS.TRACING_LEVEL,
PowerExchangeStringConstants.TERSE);
props.setProperty(MainframeConPropsConstants.IDMS.OUTPUT_IS_DETERMINISTIC,
PowerExchangeStringConstants.NO);
/*
* If you use method 1 in createMappings(), use the following code.
*/
session.addSessionTransformInstanceProperties(mapping.getTransformations().get(0),
props);
/*
* If you use method 2 in createMappings(), enable the following code to set
* session attributes and comment out the previous code.
*/
//session.addSessionTransformInstanceProperties(ASQ_TRANS, props);
session.addSessionTransformInstanceProperties(filter, null);
session.setMapping(this.mapping);
}
/* Creates a source.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources()
{
PowerConnectSourceFactory sourceFactory =
PowerConnectSourceFactory.getInstance();
try
{
source =
sourceFactory.getPowerConnectSourceInstance(PowerConnectSourceTargetType.IDMS,
"PWX_SRC_IDMS", "IDMS_mySourceDBD", "IDMS_mySource", "IDMS_r3kperf_R3KROOT_IDMS");
srcGrp = new SourceGroup("r3kperf_R3KROOT",(String)null);
source.createField("CCK_R3KROOT_COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMA
RY_KEY, FieldType.SOURCE, true);
source.createField("COUNTER",srcGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY,
FieldType.SOURCE, true);
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.IDMS);
//source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX,
"perform");
source.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange source.
*/
Properties connAttributes = new Properties();
//
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "pass");
connInfo.setCustSessionExtAttr(connAttributes);
}
/* Creates a target.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets()
{
PowerConnectTargetFactory targetFactory =
outputTarget.createField("CCK_R3KROOT_COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType
.PRIMARY_KEY, FieldType.TARGET, true);
outputTarget.createField("COUNTER",trgGrp,"","","NUM32","10","0",FieldKeyType.PRIMARY_KEY
, FieldType.TARGET, true);
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.ADABAS);
//
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.DBD_NAME, "R3KPERFS");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_NAME ,
"r3kperf_R3KROOT");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.MAP_OR_TABLE_PREFIX, "perform");
outputTarget.setMetaExtensionValue(PowerConnectMetaExtentionConstants.COMMENT_TEXT, "");
List<ConnectionInfo> connInfos = outputTarget.getConnInfos();
for (int i=0;i<connInfos.size();i++)
{
PowerConnectConInfo connInfo = (PowerConnectConInfo) connInfos.get(i);
connInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME, "jmf_conn");
connInfo.getConnProps().setProperty( ConnectionPropsConstants.DBNAME,"IMS_BGQALS91_JMF");
/*
* Sets the session extension properties for a PowerExchange target.
*/
Properties connAttributes = new Properties();
//
connAttributes.setProperty(MainframeConPropsConstants.ADABAS.ADABAS_PASSWORD, "dfdsfsdf");
connInfo.setCustSessionExtAttr(connAttributes);
}
} catch (RepoOperationException e){
e.printStackTrace();
} catch (MapFwkException e){
e.printStackTrace();
}
folder.addTarget(outputTarget);
}
/* Creates a workflow.
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createWorkflow()
*/
protected void createWorkflow() throws Exception
{
workflow = new Workflow("Workflow_For_PWX_IDMS",
"Workflow_For_PWX_IDMS", "This workflow for PowerExcange IDMS");
/*
* This is how integration service repo and domain information can be set.
*/
workflow.assignIntegrationService("repo_IS", "Domain_IN173082");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
The following sample code shows how to use the PowerExchange SEQ access method:
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.ASQTransformation;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.SourceGroup;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.TargetGroup;
import com.informatica.powercenter.sdk.mapfwk.core.TransformHelper;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
import com.informatica.powercenter.sdk.mapfwk.exception.MapFwkException;
import com.informatica.powercenter.sdk.mapfwk.exception.RepoOperationException;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectAccessMethodConstants;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectConInfo;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectMetaExtentionConstants;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSource;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSourceFactory;
import com.informatica.powercenter.sdk.mapfwk.plugin.PowerConnectSourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.portpropagation.PortPropagationContextFactory;
import com.informatica.powercenter.sdk.mapfwk.powercentercompatibility.PowerCenterCompatibilityFactory;
/**
*
* When you run this sample code, set the mapFileName object to an absolute location.
* Otherwise, the sample will fail to import an object into the repository.
*/
PowerConnectSource source;
Target outputTarget;
TransformHelper helper;
SourceGroup srcGrp;
TargetGroup trgGrp;
ASQTransformation ASQ_TRANS;
/*
* Creates a mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createMappings()
*/
folder.addMapping(mapping);
}
/*
* Creates a session to run the mapping.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSession()
*/
protected void createSession() throws Exception {
session = new Session("Session_For_PWX_SEQ", "Session_For_PWX_SEQ",
"This is session for PowerExcange SEQ Files");
/*
* If you use method 1 in createMappings(), use the following code.
*/
session.addSessionTransformInstanceProperties(mapping
.getTransformations().get(0), props);
/*
* When you follow method 2 in createMappings(), enable the following code
* to set session attributes and comment out the above code.
*/
// session.addSessionTransformInstanceProperties(ASQ_TRANS, props);
session.setMapping(this.mapping);
}
/*
* Creates a source.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createSources()
*/
protected void createSources() {
PowerConnectSourceFactory sourceFactory = PowerConnectSourceFactory
.getInstance();
try {
source.setMetaExtensionValue(
PowerConnectMetaExtentionConstants.ACCESS_METHOD,
PowerConnectAccessMethodConstants.SEQ);
source.setMetaExtensionValue(
PowerConnectMetaExtentionConstants.FILE_NAME,
"C:\\Program Files\\Informatica\\Informatica PowerExchange\\examples\
\train3.dat");
source.setMetaExtensionValue(
PowerConnectMetaExtentionConstants.MAP_NAME,
"datamap_NAME_REC");
source.setMetaExtensionValue(
PowerConnectMetaExtentionConstants.SCHEMA_NAME, "traiu3");
} catch (RepoOperationException e) {
e.printStackTrace();
} catch (MapFwkException e) {
e.printStackTrace();
}
folder.addSource(source);
this.mapFileName = "C:\\PowerExchangeSource_SEQ.xml";
}
/*
* Creates a target.
*
* @see com.informatica.powercenter.sdk.mapfwk.samples.Base#createTargets()
*/
protected void createTargets() {
ConnectionInfo conInfo = new ConnectionInfo(SourceTargetType.Flat_File);
outputTarget = new Target("rc_file", "", "", "rec_file", conInfo);
outputTarget.addField(new Field("NAME", "", "", "string", "20", "0",
FieldKeyType.NOT_A_KEY, FieldType.TARGET, false));
folder.addTarget(outputTarget);
}
/*
* Creates a workflow.
*/
/*
* Sets the Integration Service and domain information in the workflow.
*/
workflow.assignIntegrationService("Integration_Service", "Domain_vm");
workflow.addSession(session);
folder.addWorkFlow(workflow);
}
The following sample code shows how to load data to a Teradata target with MultiLoad (MLoad):
package com.informatica.powercenter.sdk.mapfwk.samples;
import java.util.ArrayList;
import java.util.List;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionInfo;
import com.informatica.powercenter.sdk.mapfwk.connection.ConnectionPropsConstants;
import com.informatica.powercenter.sdk.mapfwk.connection.SourceTargetType;
import com.informatica.powercenter.sdk.mapfwk.core.Field;
import com.informatica.powercenter.sdk.mapfwk.core.FieldKeyType;
import com.informatica.powercenter.sdk.mapfwk.core.FieldType;
import com.informatica.powercenter.sdk.mapfwk.core.InputSet;
import com.informatica.powercenter.sdk.mapfwk.core.Mapping;
import com.informatica.powercenter.sdk.mapfwk.core.NativeDataTypes;
import com.informatica.powercenter.sdk.mapfwk.core.RowSet;
import com.informatica.powercenter.sdk.mapfwk.core.Session;
import com.informatica.powercenter.sdk.mapfwk.core.Source;
import com.informatica.powercenter.sdk.mapfwk.core.Target;
import com.informatica.powercenter.sdk.mapfwk.core.Transformation;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationConstants;
import com.informatica.powercenter.sdk.mapfwk.core.TransformationContext;
import com.informatica.powercenter.sdk.mapfwk.core.Workflow;
public TeradataMLoadSample(){
target = null;
source = null;
dsqTransform = null;
}
@Override
protected void createSources() {
this.source = this.createOracleJobSource("mloadSource");
this.source.setSessionTransformInstanceProperty(ConnectionPropsConstants.OWNER_NAME,
"DSFds");
this.folder.addSource(this.source);
}
@Override
protected void createTargets() {
List<Field> fields = new ArrayList<Field>();
Field jobIDField = new Field( "JOB_ID1", "JOB_ID", "",
NativeDataTypes.Teradata.VARCHAR, "10", "0",FieldKeyType.PRIMARY_KEY,
FieldType.SOURCE, true );
fields.add( jobIDField );
target.setSessionTransformInstanceProperty(ConnectionPropsConstants.TARGET_TABLE_NAME,
"fgfdg");
this.folder.addTarget(this.target);
}
@Override
protected void createMappings() throws Exception {
mapping = new Mapping("Teradata_mload", "Teradata_mload", "This is Teradata_mload");
setMapFileName(mapping);
List<InputSet> inputSets = new ArrayList<InputSet>();
InputSet tptSource = new InputSet(this.source);
inputSets.add(tptSource);
TransformationContext tc = new TransformationContext( inputSets );
dsqTransform = tc.createTransform( TransformationConstants.DSQ, "TPT_DSQ" );
this.dsqTransform.setSessionTransformInstanceProperty(ConnectionPropsConstants.USER_DEFIN
@Override
protected void createSession() throws Exception {
session = new Session( "session_For_Teradata_mload",
"Session_For_Teradata_mload",
"This is session for Teradata_mload" );
session.setMapping( this.mapping );
tgtconInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME,"mload");
tgtconInfo.getConnProps().setProperty(ConnectionPropsConstants.FOOTER_COMMAND,"Footer Command");
tgtconInfo.getConnProps().setProperty(ConnectionPropsConstants.FLATFILE_CODEPAGE,"MS1252");
session.addConnectionInfoObject(target, tgtconInfo);
conInfo.getConnProps().setProperty(ConnectionPropsConstants.CONNECTIONNAME,"Oracle");
conInfo.getConnProps().setProperty(ConnectionPropsConstants.DRIVER_TRACING_LEVEL,"TD_OPER_NOTIFY");
conInfo.getConnProps().setProperty(ConnectionPropsConstants.INFRASTRUCTURE_TRACING_LEVEL,"TD_OPER_NOTIFY");
conInfo.getConnProps().setProperty(ConnectionPropsConstants.TRACE_FILE_NAME,"trace file");
conInfo.getConnProps().setProperty(ConnectionPropsConstants.QUERY_BAND_EXPRESSION,"query band exp");
session.addConnectionInfoObject(dsqTransform, conInfo);
}
@Override
protected void createWorkflow() throws Exception {
workflow = new Workflow( "Workflow_for_Teradata_mload",
"Workflow_for_Teradata_mload",
"This workflow for Teradata_mload" );
workflow.addSession( session );
folder.addWorkFlow( workflow );
}
Pattern: [A-Za-z_][A-Za-z_0-9]* (TO) x_$0
Description: Prefix all port names with x_. Examples:
EMPLOYEE_ID => x_EMPLOYEE_ID
EMPLOYEE_NAME => x_EMPLOYEE_NAME
GENDER => x_GENDER
ETHIC_GROUP => x_ETHIC_GROUP
Pattern: ^EMP.*_.*$ (TO) $0_IN
Description: Select port names that start with EMP and append _IN to the end of the port name. Examples:
EMPLOYEE_ID => EMPLOYEE_ID_IN
EMPLOYEE_NAME => EMPLOYEE_NAME_IN
GENDER is not selected.
ETHIC_GROUP is not selected.
Pattern: [\d]$ (TO) $0
Description: Select port names that end with a digit and keep the name as is. Examples:
EMPLOYEE_ID is not selected.
EMPLOYEE_NAME1 => EMPLOYEE_NAME1
GENDER35 => GENDER35
ETHIC_GROUP is not selected.
Pattern: _IN$
Description: Select port names with the suffix _IN and remove the suffix from the port names. Examples:
EMPLOYEE_ID is not selected.
EMPLOYEE_NAME is not selected.
GENDER_IN => GENDER
ETHIC_GROUP_IN => ETHIC_GROUP
Pattern: ^IN_
Description: Select port names that start with the prefix IN_ and remove the prefix from the port names. Examples:
IN_EMPLOYEE_ID => EMPLOYEE_ID
IN_EMPLOYEE_NAME => EMPLOYEE_NAME
GENDER is not selected.
IN_ETHIC_GROUP => ETHIC_GROUP
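The selection patterns and rename templates above use standard regular-expression syntax: the expression before (TO) selects port names, and the expression after (TO) is the replacement. The following sketch uses java.util.regex to preview how the sample patterns behave. The rename() helper and the class name are hypothetical and exist only to illustrate the pattern semantics; they are not part of the Design API.
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class PortRenamePatternDemo {
    // Returns the renamed port name, or null when the pattern does not select the port.
    static String rename(String portName, String selectionPattern, String replacement) {
        Matcher matcher = Pattern.compile(selectionPattern).matcher(portName);
        if (!matcher.find()) {
            return null; // port is not selected
        }
        return matcher.replaceAll(replacement);
    }
    public static void main(String[] args) {
        // [A-Za-z_][A-Za-z_0-9]* (TO) x_$0 : prefix every port name with x_
        System.out.println(rename("EMPLOYEE_ID", "[A-Za-z_][A-Za-z_0-9]*", "x_$0"));  // x_EMPLOYEE_ID
        // ^EMP.*_.*$ (TO) $0_IN : append _IN to port names that start with EMP
        System.out.println(rename("EMPLOYEE_NAME", "^EMP.*_.*$", "$0_IN"));           // EMPLOYEE_NAME_IN
        System.out.println(rename("GENDER", "^EMP.*_.*$", "$0_IN"));                  // null, not selected
        // _IN$ : select port names with the suffix _IN and remove the suffix
        System.out.println(rename("GENDER_IN", "_IN$", ""));                          // GENDER
        // ^IN_ : select port names with the prefix IN_ and remove the prefix
        System.out.println(rename("IN_EMPLOYEE_ID", "^IN_", ""));                     // EMPLOYEE_ID
    }
}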
Interface Limitations
This appendix includes the following topics:
• Design API
• PowerExchange API
Design API
The Design API has the following limitations:
• Data source support. The following are the Design API data source limitations:
- You cannot create XML metadata from the following file types: XML, DTD, relational tables, and flat
files.
- You cannot create an XML source or XML target definition from an XML schema definition with a
hierarchical denormalized view.
• Transformations. You cannot use the Design API to generate transformation metadata for the following
transformations:
- Unstructured Data
- Data Quality
- Java
PowerExchange API
The PowerExchange API has the following limitations:
• You can generate queries for reader session attributes such as SQL Query, User Defined Join, Pre SQL,
Post SQL, and Source Filter. However, you cannot validate the queries.
• There are no setter methods for the Date and Time/TimeStamp datatypes in the IOutputBuffer interface.
To work with these datatypes, convert them to long.
• There are no setter methods for the CLOB, BLOB, and XML datatypes in the IOutputBuffer interface. To
work with the CLOB datatype, convert it to a string. To work with the BLOB datatype, convert it to byte[].
See the conversion sketch after this list.
• By default, the PowerExchange API readers are also created as application type readers for relational data
sources. When you run a session, you cannot switch a JDBC source or target to a relational reader or
writer.
• The PowerExchange API does not expose the Source Rows As session property. The Java DB adapter
works around this limitation by getting the default value for the Source Rows As session property from the
IUtilSrv. This displays the Source Rows As property and the values Insert, Delete, Update, and Data Driven in
the Workflow Manager. You can also use an Update Strategy transformation in a mapping to work around
this limitation.
• The PowerExchange API does not support built-in mapping-level attributes such as SQL overrides and SQL
filter. If you configure mapping-level attributes for the Source Qualifier in the Designer, the values of the
attributes are not visible in the session but are evaluated when a query is prepared.
• You cannot use a PowerExchange API adapter to connect to a Lookup transformation.
• The PowerExchange API does not support data recovery or resiliency.
• The PowerExchange API cannot access the reject file to load rejected rows.
• The PowerExchange API for Java does not implement the ITargetField.getRefPKInfo() method.
• The isDiscardOutput interface for the Debugger is not available in PowerExchange API for Java.
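The datatype workarounds in the preceding list amount to plain Java conversions that you apply before passing a value to the output buffer. The following minimal sketch shows those conversions with standard JDBC types. The class and method names are illustrative only, and the IOutputBuffer setter call that would receive each converted value is omitted because its exact signature is adapter specific.
import java.sql.Blob;
import java.sql.Clob;
import java.sql.SQLException;
import java.sql.Timestamp;
public class OutputBufferConversionSketch {
    // Date/Time/Timestamp values: pass them to the output buffer as long.
    static long toLong(Timestamp timestamp) {
        return timestamp.getTime(); // milliseconds since the epoch
    }
    // CLOB values: pass them to the output buffer as String.
    static String toString(Clob clob) throws SQLException {
        return clob.getSubString(1, (int) clob.length()); // CLOB positions are 1-based
    }
    // BLOB values: pass them to the output buffer as byte[].
    static byte[] toBytes(Blob blob) throws SQLException {
        return blob.getBytes(1, (int) blob.length());
    }
}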
Index
A
ALLOWEDCONNECTION element
  description 50
ALLOWEDDBTYPE element
  description 48
ALLOWEDTEMPLATE element
  description 49
API
  Informatica Development Platform 15
ATTRIBUTE element
  description 46
ATTRIBUTECATEGORY element
  description 48
C
CLASS element
  description 48
client plug-in
  registering 25
Client plug-in
  example, pmjavadbclient.dll 59
command line programs
  description 16
  interface to PowerCenter 15
commit interval
  Java DB adapter example 73
commit type
  Java DB adapter example 73
compiler
  selecting for development 26
CONNECTION element
  description 53
  Java DB adapter example 66
CONNECTIONREFERENCE element
  description 49
CONNECTIONTOEXTENSION element
  description 55
  Java DB adapter example 67
Custom Function API
  description 20
  usage 21
D
DBTYPE element
  description 32
  Java DB adapter example 60
DBTYPETOEXTENSION element
  description 54
DBTYPETOWIDGETATTR element
  description 41
  Java DB adapter example 62
Design API
  connection objects, creating 100
  data flow 94
  description 20, 90
  exporting 101
  folder object, creating 92
  importing 101
  linking ports 95
  mappings, creating 94
  multi-group transformations 94
  objects, creating 92
  port link context object 95
  port propagation context object 95
  propagating ports 95
  regular expression, sample pattern 178
  repository object, creating 92
  repository, browsing 90
  sessions, creating 99
  single-group transformations 94
  source, creating 93
  target, creating 93
  transformation inputs 94
  transformation outputs 94
  transformations, creating 97
  usage 20
  workflows, creating 99
Design API sample code
  Adabas, connecting 158
  Aggregator transformation, creating 141
  Data Masking transformation, creating 141
  Datacom, connecting 162
  DB2 data source, connecting 128
  Email Data Masking transformation, creating 142
  Expression transformation, creating 143
  Filter transformation, creating 144
  IDMS, connecting 167
  mapplets, using 121
  Netezza data source, using 128
  SAP table, writing 124
  Sequence Generator transformation, creating 144
  sequential data, connecting 172
  shortcuts, using 122
  Sorter transformation, creating 144
  source and targets, using 127
  source qualifier, using 139
  sources, creating 127
  SQL transformation, creating 144
  Stored Procedure transformation, creating 145
  Transaction Control transformation, creating 145
  transformations, creating 141
  unconnected Lookup transformation, creating 146
  Union transformation, creating 146
  Update Strategy transformation, creating 147
  XML entity view, creating 151
  XML Generator transformation, creating 147
  XML hierarchical view, creating 153
  XML Parser transformation, creating 149
  XML source, access 154
  XML target, access 154
development environment
  setting up 25
DLLs (dynamic linked libraries)
  compiling C++ on Windows 26
  path, defining 25
E
error handling
  Java DB adapter example 73
EXTENSION element
  description 44
  Java DB adapter example 63
F
FIELDATTR element
  description 41
H
HIDDENCONNECTIONATTRIBUTETOEXTENSION element
  description 52
HIDDENEXTENSIONATTRIBUTETOCONNECTION element
  description 56
I
Informatica Development Platform
  Custom Function API 20
  definition 15
  Design API 20
  installation 16
  interface to PowerCenter 15
  Operations API 19
  PowerExchange API 17
  Transformation API 19
interfaces
  to PowerCenter 15
K
KEYTYPE element
  description 38
L
LIBRARY element
  description 43
M
MEDEFINITION element
  description 57
MEDOMAIN element
  description 56
  Java DB adapter example 68
metadata
  plug-in, defining for 24
  plug-in, registering client 25
  plug-in, registering server 24
MULTIVALUEATTRIBUTE element
  description 43
O
Operations API
  description 19
  usage 19
P
partitions
  Java DB adapter example 73
path
  to DLLs 25
plug-in definition file
  example, pmJDBC.xml 59
plug-in metadata
  ALLOWEDCONNECTION element 50
  ALLOWEDDBTYPE element 48
  ALLOWEDTEMPLATE element 49
  ATTRIBUTE element 46
  ATTRIBUTECATEGORY element 48
  CLASS element 48
  CONNECTION element 53
  CONNECTIONREFERENCE element 49
  CONNECTIONTOEXTENSION element 55
  DATATYPE element 39
  DBSUBTYPE element 36
  DBTYPE element 32
  DBTYPETOEXTENSION element 54
  DBTYPETOWIDGETATTR element 41
  defining 24
  EXTENSION element 44
  FIELDATTR element 41
  HIDDENCONNECTIONATTRIBUTETOEXTENSION element 52
  HIDDENEXTENSIONATTRIBUTETOCONNECTION element 56
  KEYTYPE element 38
  LIBRARY element 43
  MEDEFINITION element 57
  MEDOMAIN element 56
  MULTIVALUEATTRIBUTE element 43
  PLUGIN element 31
  registering 24, 25
  structure 30
plug-ins
  building server and client 26
  client, registering 25
  debugging 27
  repository ID attributes, obtaining 23
  server, registering 24
  unregistering 28
PLUGIN element
  description 31
  Java DB adapter example 60
plugin.dtd
  structure 30
port propagation
  regex 178
PowerExchange API
  description 17
  requirements 18
  usage 18
R
reader buffer
  Java DB adapter example 72
reader extension
  Java DB adapter example 63
reader extension attributes
  Java DB adapter example 63
reader session
  Java DB adapter example 69
REG file
  client plug-ins, registering 25
regex
  sample patterns 178
repository ID attributes
  plug-ins, obtaining for 23
S
server plug-in
  example, pmJDBCplugin.jar 59
  registering 24
shared library
  compiling on UNIX 28
T
Transformation API
  description 19
  usage 19
W
web services
  description 16
  interface to PowerCenter 15
Windows registry
  client plug-ins, registering 25
writer buffer
  Java DB adapter example 72
writer extension
  Java DB adapter example 65
writer session
  Java DB adapter example 70