Documentum Foundation Services 6.7 Development Guide
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Legal Notice
Copyright © 2006-2013 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS
OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. Adobe and Adobe PDF
Library are trademarks or registered trademarks of Adobe Systems Inc. in the U.S. and other countries. All other trademarks
used herein are the property of their respective owners.
Documentation Feedback
Your opinion matters. We want to hear from you regarding our product documentation. If you have feedback about how we can
make our documentation better or easier to use, please send us your feedback directly at [email protected].
Table of Contents
Preface ................................................................................................................................ 13
Chapter 1 Overview ..................................................................................................... 15
What are Documentum Foundation Services?..................................................... 15
Web services .................................................................................................... 16
Java services ..................................................................................................... 16
Productivity layer ............................................................................................. 17
DFS and DFC ................................................................................................... 17
XML Support ............................................................................................... 18
DFS tools ......................................................................................................... 19
Enterprise Content Services ............................................................................... 20
Chapter 4 Consuming DFS with the Java DFS Productivity Layer ............................... 43
Local and remote Java consumers ...................................................................... 43
Configuring Java dependencies for DFS productivity-layer consumers ................. 45
Framework-only consumer ........................................................................... 45
Data model classes in service JARs................................................................. 46
Adding dfs-client.xml to the classpath ........................................................... 47
Avoid having dctm.jar on classpath when executing services ........................... 47
Configuring dfc.properties ................................................................................ 47
Configuring service addressing (remote consumers only) .................................... 48
Creating a service context in Java ....................................................................... 49
Setting up service context (Java) .................................................................... 49
Identities ...................................................................................................... 50
Locale .......................................................................................................... 51
Service context runtime properties ................................................................. 51
Transaction support ...................................................................................... 52
Combining USER_TRANSACTION_HINT and PAYLOAD_PROCESSING_POLICY ............. 52
Service context registration ........................................................................... 53
Instantiating a service in Java ............................................................................ 54
OperationOptions ............................................................................................. 55
WSDL-first consumption of services .................................................................. 56
This document is a guide to using EMC Documentum Foundation Services (DFS) for developing
DFS service consumers and custom DFS services.
Intended readership
This document is intended for developers and architects building consumers of DFS services, and for
service developers seeking to extend DFS services with custom services. This document will also
be of interest to managers and decision makers seeking to determine whether DFS would offer
value to their organization.
Revision history
The following changes have been made to this document.
For additional information, refer to the Javadocs or .NET HTML help files, to the
sample code delivered with the DFS SDK, and to resources on EMC Developer Network
(https://community.emc.com/community/edn/documentum).
This chapter is intended to provide a brief overview of DFS products and technologies. This chapter
covers the following topics:
• What are Documentum Foundation Services?, page 15
• Web services, page 16
• Java services, page 16
• Productivity layer, page 17
• DFS and DFC, page 17
• DFS tools, page 19
• Enterprise Content Services, page 20
Web services
The DFS web services are SOAP/WSDL-based services that can be accessed using a standard WSDL
client such as Axis 2 or the Sun JAX-WS reference implementation, or by using the DFS Java or .NET
productivity layer. The DFS web services are installed as part of Content Server, where they run in
the Java Method Server on the same machine as the Content Server. You can also deploy DFS on a
separate tier or cluster on any supported JEE application server. For details on supported application
servers, refer to the Documentum Foundation Services Release Notes. For deployment instructions, refer
to the Documentum Foundation Services Deployment Guide.
Java services
The DFS Java services are delivered on the DFS SDK as a class library. Like the web services, the
Java services are a service layer over a DFC client, which connects to Documentum Content Server.
The DFS Java services are exposed as Java interfaces and run in the same Java Virtual Machine as the
service consumer. The Java services are optimal for building an application that integrates a UI server
or custom web service layer with DFS on a single tier, rather than consuming DFS services remotely
from multiple client locations. The Java services are in almost all areas functionally identical to the
web services, to the extent that it is possible to build a test consumer that can be switched between
local and remote modes (as in fact is done in the SDK Java consumer samples). This can be useful
for debugging custom services locally before testing them in a remote deployment. However, there
are significant differences between the Java services and web services in specific areas, such as content
transfer modes and content return types, service context registration, and Kerberos integration.
Productivity layer
The DFS SDK includes Java and .NET client class libraries known as the productivity layer. We want to
emphasize that the productivity layer is not required to consume DFS services, but it does provide
features that may simplify development, such as:
• Convenience methods and constructors that simplify working with collections, instantiation of
service proxies, registration of service contexts, and so forth.
• Transparent integration with ACS and BOCS for distributed content.
• Transparent invocation of UCF and handling of UCF content transfer.
• Support classes for Kerberos and other SSO authentication.
• The ability to execute the service either remotely via web services or locally within the same
JVM (Java only).
The Java productivity layer is based on the Sun reference implementation of JAX-WS. The Java
productivity layer can be used either as a consumer of DFS web services (remote mode), or a local
consumer, running in the same JVM as the DFS Java services.
The .NET productivity layer is based on Microsoft Windows Communication Foundation (WCF) and
has functional parity with the Java productivity layer.
Do not confuse the term with DFC operations—in DFS the term is used generically for any method
exposed by the service.
Generally speaking, DFS services expose just a few service operations, and the operations have
simple signatures. For example, the Object service update operation has this signature:
DataPackage update(DataPackage dataPackage, OperationOptions options)
However, this “simple” operation provides a tremendous amount of power and flexibility. It’s just
that the complexity has moved from the “verbs” (the number of methods and the complexity of the
method signature) to the “nouns” (the objects passed in the operation). The operation makes a lot
of decisions based on the composition of the objects in the DataPackage and relationships among
those objects, and on profiles and properties provided in the operationOptions parameter or set in
the service context—these settings are used to modify the default assumptions made by the service
operation. This approach helps to minimize chattiness in a distributed application. The DFS client
spends most of its effort working with local objects, rather than in conversation with the service API.
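For example, the following minimal sketch (using the Java productivity layer) updates a single property on an existing object through this one operation; the object ID, repository name, and the objectService instance (an IObjectService obtained from the ServiceFactory) are placeholder assumptions for illustration:

// a minimal sketch; the object ID and repository name are placeholders
ObjectIdentity objectIdentity =
        new ObjectIdentity(new ObjectId("0900000180001234"), "myRepository");
DataObject dataObject = new DataObject(objectIdentity);
// the "noun" carries the intent: which object, which property, what value
dataObject.getProperties().set("object_name", "updated-object-name");
DataPackage dataPackage = new DataPackage(dataObject);
DataPackage result = objectService.update(dataPackage, new OperationOptions());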
XML Support
Documentum’s XML support has many features. The XML support provided by DFS is similar to the
way in which DFC supports XML. For more information on XML processing options (import/export)
and XML support, refer to the DFC Development Guide.
Using XML support requires you to provide a controlling XML application. When you import an
XML document, DFC examines the controlling application’s configuration file and applies any
chunking rules that you specify there. If the application’s configuration file specifies chunking rules,
DFC creates a virtual document from the chunks it creates. It imports other documents that the XML
document refers to as entity references or links and makes them components of the virtual document.
It uses attributes of the containment object associated with a component to remember whether it
came from an entity or a link and to maintain other necessary information. Assembly objects have the
same XML-related attributes as containment objects do. The processed XML files are imported into
Content Server as virtual documents; therefore, to retrieve the XML files, you must use methods
that are applicable to processing virtual documents.
DFC provides substantial support for the Documentum XML capabilities. XML processing by DFC is
largely controlled by configuration files that define XML applications. Refer to the XML Application
Development Guide for information about working with content in XML format.
The following declaration sets an application name:
ContentTransferProfile.setXMLApplicationName(String xmlApplicationName);
If no XML application is provided, DFC will use the default XML application for processing. To
disable XML processing, set the application name to Ignore.
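For example, the following sketch disables XML processing for a transfer by naming the Ignore application; the serviceContext variable is assumed to be an existing productivity-layer IServiceContext instance:

// a minimal sketch: set the XML application name on a ContentTransferProfile
// and store the profile in the service context
ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.setXMLApplicationName("Ignore"); // or a custom application name
serviceContext.setProfile(contentTransferProfile);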
Use the UCF mode for importing XML files with external links and uploading external files. If you use
other content transfer modes, only the XML file will be imported and the links will not be processed.
DFS tools
The DFS tools provide functionality for creating services based on Java source code (“code first”),
services based on WSDL (“WSDL first”), or client runtime support for existing services based on
WSDL. These tools can be used through a Composer interface, or scripted using Ant.
DFS services can be implemented as POJOs (Plain Old Java Objects), or as BOF (Business Object
Framework) service-based business objects (SBOs). The service-generation tools build service
artifacts that are archived into a deployable EAR file for remote execution and into JAR files for
local execution using the optional client runtime. C# client-side proxies are generated using the
DFS Proxy Generator utility.
For information on using the tools through the Composer interface, refer to the Composer User Guide.
The following table describes the supported Ant tasks that can be used for tools scripting:
To avoid out-of-memory errors when running the DFS build tools in Ant, set the ANT_OPTS
environment variable. For running in a UNIX shell, DOS, or Cygwin, set ANT_OPTS to “-Xmx512m
-XX:MaxPermSize=128m” to prevent OutOfMemoryError and PermGen space errors.
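For example (the values are those recommended above):

set ANT_OPTS=-Xmx512m -XX:MaxPermSize=128m          (DOS)
export ANT_OPTS="-Xmx512m -XX:MaxPermSize=128m"     (UNIX shell or Cygwin)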
For running in Eclipse, set Window -> Preferences -> Java -> Installed JREs -> Edit -> Default VM
Arguments to “-Xmx512m -XX:MaxPermSize=128m”.
For more information on building custom services and the build tools, see Chapter 8, Custom Service
Development with DFS.
This chapter provides information about how to consume DFS services remotely using web services
frameworks and the DFS WSDL interface (or for that matter any Enterprise Content Services
WSDL), without the support of the client productivity layer. The chapter will present some concrete
examples using Axis2 to illustrate a general approach to DFS service consumption that is applicable
to other frameworks. Documentation of an additional Java sample that uses JAX-WS RI and
demonstrates content transfer using the Object service is available on the EMC Developer Network
(EDN) at https://community.emc.com/docs/DOC-3038. A .NET consumer sample is available at
https://community.emc.com/docs/DOC-2604.
Principal issues
The following issues apply generally to DFS consumers that do not use the client productivity layer.
Proxy generation
The sample consumers presented here use frameworks that generate static proxies that are bound
to data types defined in the WSDL and associated XML schemas. Using these classes, the
framework takes care of marshalling the data that they contain into XML included in SOAP requests
and unmarshalling objects in SOAP responses into instances of the proxy classes. This frees the
programmer to work with the proxy classes in an object-oriented manner and to exchange instances
of these types with the remote operation as if it were a local method. The samples presented here use static
proxies, so generating the proxies is one of the first steps in creating a consumer. Some frameworks,
such as Apache CXF, support dynamic proxies that are created at runtime.
Exception handling
If you are creating a Java consumer, you have the option of having the full stack trace of service
exceptions serialized and included in the SOAP response. You can turn this on by setting the
dfs.exception.include_stack_trace property in the service context to true. By default, the server
returns the top level exception and stack trace, but not the entire exception chain. Non-Java clients
should use the default setting.
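In the Java productivity layer, for example, this property can be set on the context as a runtime property; the following one-line sketch assumes serviceContext is your IServiceContext instance (WSDL-based consumers instead carry the property in the serialized ServiceContext):

// request full exception chains in SOAP faults (Java consumers only)
serviceContext.setRuntimeProperty("dfs.exception.include_stack_trace", "true");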
Content transfer
DFS supports a number of different options for transferring content to and from the repository. The
sample in this chapter does not illustrate content transfer. The topic is covered in Chapter 10, Content
Transfer and Chapter 11, Content Transfer with Unified Client Facilities.
The Axis samples demonstrate two approaches: one sample consumes services in a stateless manner
by passing service context information in the SOAP header, while the AxisDfsConsumer.java sample
registers the service context and passes a security token in the SOAP header. The following sections
walk you through the steps of re-creating these samples:
• Configuring the classpath, page 23
• Generating the client proxies, page 23
• Writing a consumer that registers the service context, page 24
• Writing a consumer that does not register the service context, page 28
As a convenience, an Ant build.xml file is provided for the Axis samples. You can use this file to
compile and run the samples instead of carrying out the tasks in the following sections manually. For
more information, see Running the Axis samples, page 30.
2. Generate the proxies for each service that you want to consume with the following command. For
the samples to work correctly, generate proxies for the SchemaService:
wsdl2java -o javaOutputDirectory -d jaxbri -uri http://host:port/services/module/ServiceName?wsdl
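For example, to generate proxies for the SchemaService in the core module from a DFS instance on localhost (the host, port, and output directory are placeholders for your deployment):

wsdl2java -o generated -d jaxbri -uri http://localhost:9080/services/core/SchemaService?wsdl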
Before calling a service, you need to create a ServiceContext object that contains information such
as user credentials and profiles. The following code sample shows a simple method that calls the
Context Registry Service given a ServiceContext object. The return value is the security token that
you need to inject into the SOAP header.
ContextRegistryServiceStub stub =
        new ContextRegistryServiceStub(this.contextRegistryURL);
RegisterResponse response = stub.register(register);
return response.getReturn();
}
The following SOAP message is sent as the request when a context is registered:
<?xml version="1.0" ?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Body>
<ns7:register
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://context.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://core.datamodel.fs.documentum.emc.com/"
xmlns:ns7="http://services.rt.fs.documentum.emc.com/">
<context>
<ns4:Identities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="ns4:RepositoryIdentity" repositoryName="repository"
password="password" userName="username"/>
</context>
<host>http://host:port/services/core</host>
</ns7:register>
</S:Body>
</S:Envelope>
When registering a service context with the Context Registry service, a security token is returned that
you can use to refer to the service context that is stored on the server. You can then use the security
token in subsequent requests to the service, so the consumer does not have to re-send service context
information over the wire. To utilize this feature, you must inject the security token into the SOAP
header before calling the service. To do this, create a WS-Security compliant header that contains
the security token. Once the header is created, you can add it to the SOAP header. The following
code sample shows a method that creates the security header given the security token that was
returned by the Context Registry Service:
OMFactory factory = OMAbstractFactory.getOMFactory();
OMNamespace wsse = factory.createOMNamespace("http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "wsse");
OMElement securityElement = factory.createOMElement("Security", wsse);
OMElement tokenElement = factory.createOMElement("BinarySecurityToken", wsse);
tokenElement.setText(token);
securityElement.addChild(tokenElement);
return securityElement;
}
The following XML snippet shows what the security header should look like. The value of the
BinarySecurityToken element is the token that was returned by the Context Registry Service.
<wsse:Security xmlns:wsse=
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<wsse:BinarySecurityToken
QualificationValueType="http://schemas.emc.com/documentum#ResourceAccessToken"
xmlns:wsu=
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
wsu:Id="RAD">
hostname/123.4.56.789-123456789123-45678901234567890-1
</wsse:BinarySecurityToken>
</wsse:Security>
After obtaining the security token and creating the security header, you can then begin to code
your consumer logic and set up any data model objects that the service operation requires. The
JaxWsDfsConsumer sample is intentionally simple and does not require much preparation for the
call to the getRepositoryInfo operation of the Schema service.
Once you have the input that is required by the service operation, you can set up the call to the
service. The following code sample shows how to instantiate a service, get its port, and call an
operation on the port. The code also shows how to set the outbound headers for the request to the
service, which is important because it adds the security header to the request.
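As a minimal sketch of that pattern (the generated stub and request wrapper class names are assumptions based on typical Axis2 wsdl2java output, and securityElement is the OMElement built by the method above; in Axis2 the stub's ServiceClient plays the role of the port):

// instantiate the generated stub against the service endpoint
SchemaServiceStub stub =
        new SchemaServiceStub("http://host:port/services/core/SchemaService");
// attach the wsse:Security OMElement built earlier as an outbound header
stub._getServiceClient().addHeader(securityElement);
// invoke the operation; this sends the SOAP request shown below
GetRepositoryInfo request = new GetRepositoryInfo();
request.setRepositoryName("repository");
GetRepositoryInfoResponse response = stub.getRepositoryInfo(request);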
The following SOAP message gets sent as the request when calling the Schema Service’s
getRepositoryInfo operation:
<?xml version="1.0" ?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Header>
<wsse:Security xmlns:wsse=
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<wsse:BinarySecurityToken
QualificationValueType="http://schemas.emc.com/documentum#ResourceAccessToken"
xmlns:wsu=
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
wsu:Id="RAD">
hostname/123.4.56.789-123456789123-45678901234567890-1
</wsse:BinarySecurityToken>
</wsse:Security>
</S:Header>
<S:Body>
<ns7:getRepositoryInfo
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://schema.core.datamodel.fs.documentum.emc.com/"
xmlns:ns7="http://core.services.fs.documentum.emc.com/">
<repositoryName>repository</repositoryName>
</ns7:getRepositoryInfo>
</S:Body>
</S:Envelope>
If you do not register the service context, you have to pass the credentials, along with any other
service context information, with every call to a service operation. The state of the service is not
kept on the server, so you must maintain the service context on the client; merging of service
contexts on the server is not possible. The following code snippet shows how to create a simple
service context object.
RepositoryIdentity identity = new RepositoryIdentity();
identity.setUserName(user);
identity.setPassword(password);
identity.setRepositoryName(repository);
ServiceContext context = new ServiceContext();
context.getIdentities().add(identity);
After setting the credentials along with any other desired information in the service context, you can
begin to code your consumer logic and set up any data model objects that the service operation
requires. The JaxWsDfsConsumer sample is intentionally simple and does not require much
preparation for the call to the getRepositoryInfo operation of the Schema service.
Once you have the input that is required by the service operation, you can set up the call to the service.
The following code sample shows how to instantiate a service, get its port, and call an operation on
the port. The code also shows the service context being set in the outbound header of the request to
the service. This places the service context information, most notably the credentials, in the SOAP
header so the service can authenticate the consumer. All other desired service context information
must be present in every call to the service, as it is not cached on the server.
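As a minimal sketch (mirroring the registered-context example above; names are assumptions), the only difference is that the full ServiceContext, rather than the security token, is attached as the outbound header:

// contextElement is assumed to be the ServiceContext built above,
// marshalled to an OMElement (for example with JAXB); because nothing
// is cached on the server, it must be sent with every call
SchemaServiceStub stub =
        new SchemaServiceStub("http://host:port/services/core/SchemaService");
stub._getServiceClient().addHeader(contextElement);
GetRepositoryInfo request = new GetRepositoryInfo();
request.setRepositoryName("repository");
GetRepositoryInfoResponse response = stub.getRepositoryInfo(request);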
The following SOAP message gets sent as the request when calling the Schema Service’s
getRepositoryInfo operation:
<?xml version="1.0" ?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Header>
<ns4:ServiceContext
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://context.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://core.datamodel.fs.documentum.emc.com/"
token="temporary/USXXLYR1L1C-1210202690054">
<ns4:Identities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="ns4:RepositoryIdentity" repositoryName="repository"
password="password" userName="username"/>
</ns4:ServiceContext>
</S:Header>
<S:Body>
<ns7:getRepositoryInfo
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://schema.core.datamodel.fs.documentum.emc.com/"
xmlns:ns7="http://core.services.fs.documentum.emc.com/">
<repositoryName>repository</repositoryName>
</ns7:getRepositoryInfo>
</S:Body>
</S:Envelope>
<?xml version="1.0" ?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Header>
<ns4:ServiceContext
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://context.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://core.datamodel.fs.documentum.emc.com/"
token="temporary/USXXLYR1L1C-1210201103234">
<ns4:Identities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="ns4:RepositoryIdentity" repositoryName="techpubs"/>
</ns4:ServiceContext>
</S:Header>
<S:Body>
<ns7:getSchemaInfo
xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://schema.core.datamodel.fs.documentum.emc.com/"
xmlns:ns7="http://core.services.fs.documentum.emc.com/">
<repositoryName>techpubs</repositoryName>
<schemaName>DEFAULT</schemaName>
</ns7:getSchemaInfo>
</S:Body>
</S:Envelope>
This chapter describes how to run the Java consumers provided with the DFS SDK that utilize the Java
productivity layer. The Java productivity layer is an optional client library that provides convenience
classes to make it easier to consume DFS services using Java. For more general information about Java
productivity layer consumers see Chapter 4, Consuming DFS with the Java DFS Productivity Layer.
• Verifying prerequisites for the samples, page 33
• Local and remote consumers, page 35
• Running your first sample consumer, page 35
• Exploring the DFS service consumer samples, page 36
Note: The LifecycleService samples require installing sample data on your test repository.
Before running these samples, install the Documentum Composer project contained in
./csdata/LifecycleProject.zip to your test repository using Documentum Composer, or install the DAR
file contained in the zip archive using the darinstaller utility that comes with Composer. The
Documentum Composer User Guide provides more information on how to use Composer to install a
DAR file on a repository.
A few samples, such as the VersionControlService samples, depend on the availability of the ACS
server. If you have problems with the VersionControlService samples or ObjectService samples,
ensure that ACS is available. (For more information, refer to the Distributed Configurations section
in the Documentum Administrator User Guide.)
For example, if your services are running on localhost at port 8888 and using the default http protocol
(rather than https), the address of the Object service would be
http://localhost:8888/services/core/ObjectService
DFS services are organized into several different service modules. The central module, named “core”,
contains the Object service and other fundamental Documentum platform services. Unless you
specifically change the shipped configuration settings, “core” will be the default module.
To test whether you have the correct endpoint address and whether the DFS server is running
and available, you can open a service WSDL in a browser. For example, if you want to test a DFS
instance deployed with Content Server on a host with the DNS name MyContentServer, you could
open the QueryService WSDL in your browser using the following URL (DFS on a Content Server
installation is always at port 9080):
http://MyContentServer:9080/services/core/QueryService?wsdl
If the WSDL does not come up in your browser, make sure that the Java Method Server is running
on the Content Server host.
The sample classes use the following hard-coded information. (Of course in a real application you
would never hard code these settings—we do it this way in the samples for the sake of simplicity.)
• repository
• userName
• password
• host
/************************************************************
* You must supply valid values for the following fields: */
2. Run the following command to delete any previously compiled classes and compile the Java
samples project:
ant clean compile
If you are running the samples using Ant, read the following sections:
• Overview of the consumer samples, page 37
• Setting hard coded values in the sample code, page 38
• Configuring DFS client properties (remote mode only), page 39
• Configuring dfc.properties (local mode only), page 39
• Compiling and running the samples using Ant, page 40
5. Under Project>Properties>Java Compiler, make sure that JDK compliance is set to 5.0, then
click OK.
You should now be able to compile and run the samples in Eclipse after you have configured the
samples correctly, which will be discussed in Setting hard coded values in the sample code, page 38
and Configuring DFS client properties (remote mode only), page 39.
1. Edit %SAMPLES_LOC%\test\com\emc\documentum\fs\doc\test\client\SampleContentManager.java
and specify the values for the gifImageFilePath and gifImage1FilePath variables. The consumer
specify the values for the gifImageFilePath and gifImage1FilePath variables. The consumer
samples use these files to create test objects in the repository. Two gif images are provided in the
%SAMPLES_LOC%\content folder, so you can set the variables to point to these files.
2. Edit the %SAMPLES_LOC%\test\com\emc\documentum\fs\doc\test\client\DFSTestCase.java
file and specify the values for repository, userName, password, and remoteMode. You can
optionally specify a value for the toRepository variable if you want to run a test such as copying
an object from one repository to another. If you do not want to run the test, you must set the
value of the toRepository variable to null.
Note: There is more than one copy of this file on the SDK, so make sure you edit the one in
%SAMPLES_LOC%\etc.
For more information see Configuring service addressing (remote consumers only), page 48.
Also, to run the workflow service tests, you must specify correct values for
dfc.globalregistry.username, dfc.globalregistry.password, and dfc.globalregistry.repository. If you are running the
samples in remote mode, you do not have to edit your local copy of dfc.properties. The dfc.properties
file that is on the server is used, which was configured during installation of DFS.
Note: The dfc.globalregistry.password setting stored in dfc.properties is encrypted by the DFS
installer, so the easiest way to get this value is from the dfc.properties created by the installer. You can
find it within the emc-dfs.ear file deployed on the application server in the APP-INF\classes directory.
1. If necessary, modify the path to the DFS SDK root directory in the Ant
%SAMPLES_LOC%/build.properties file. In the provided file, this value is set as follows—modify it as required:
dfs.sdk.home=c:/emc-dfs-sdk-6.7/
2. Open a command prompt, change to the %SAMPLES_LOC% directory and execute the following
command:
ant clean compile
3. Execute the info target for information about running the samples.
ant info
This will print information to the console about available Ant targets and available samples to run.
Buildfile: build.xml
[echo] EMC DFS SDK home is 'c:/emc-dfs-sdk-6.7/'
[echo] This project home is
'C:\emc-dfs-sdk-65\samples\JavaConsumers\DesktopProductivityLayer'
--beware spaces in this path (JDK issues).
info:
[echo] Available tasks for the project
[echo] ant clean - to clean the project
[echo] ant compile - to compile the project
[echo] ant run -Dtest.class=<class name> - to run a test class
[echo] Available test classes for run target:
[echo] TAccessControlService
[echo] TDriver
[echo] TLifecycleService
[echo] TObjServiceCopy
[echo] TObjServiceCreate
[echo] TObjServiceGet
[echo] TObjServiceDelete
[echo] TObjServiceMove
[echo] TObjServiceUpdate
[echo] TQueryServicePassthrough
[echo] TSchemaServiceDemo
[echo] TSearchService
[echo] TVersionControlServiceDemo
[echo] TVirtualDocumentService
[echo] TWorkflowService
The classes listed are all in the com.emc.documentum.fs.doc.test.client package; these classes all
contain the main methods that drive the consumer sample implementations.
The TDriver class runs all of the tests by calling the main method of each of the test classes. You
can edit the TDriver class and comment out any tests that you do not want to run.
4. Run any of the classes, listed above, individually using the ant run target as follows:
ant run -Dtest.class=<className>
The DFS productivity layer contains a set of Java libraries that assist in writing DFS consumers.
Using the DFS productivity layer is the easiest way to begin consuming DFS. This chapter covers
the following topics:
• Local and remote Java consumers, page 43
• Configuring Java dependencies for DFS productivity-layer consumers, page 45
• Configuring dfc.properties, page 47
• Creating a service context in Java, page 49
• Instantiating a service in Java, page 54
• OperationOptions, page 55
• WSDL-first consumption of services, page 56
In a local DFS client, both the consumer and service run on the same Java virtual machine. DFS uses
a local DFC client to interact with Content Server. Consumer code invokes DFS services using the
productivity layer, and does not invoke classes on the DFC layer.
Local DFS deployment is a mainstream application topology, and is particularly prevalent in Java web
applications, because it integrates the DFS client directly into the web application, which is simpler
and more efficient than consuming web services over a remote connection.
Necessarily, a local DFS consumer differs in some important respects from a remote consumer. In
particular note the following:
• Service context registration (which sets state in the remote DFS service) has no meaning in a local
context, so registering the service context does nothing in a local consumer.
• Content transfer in a local application is completely different from content transfer in a remote
application. Remote content transfer protocols (MTOM, Base64, and UCF) are not used by a local
consumer. Instead, content is transferred by the underlying DFC client. For more information
see Content types returned by DFS, page 164.
Framework-only consumer
If you are writing a consumer of a custom service that does not need to invoke any standard DFS
services, and does not expose the DFS data model, your project does not need to include the DFS
service and data model JARs—the runtime (framework) JARs are sufficient.
Applications that use core services and the core data model should also include on their classpath, in
addition to the core services and runtime jars:
• emc-search-services.jar (or emc-search-services-remote.jar), which includes classes required by
the Query and QueryStore services.
• emc-collaboration-services.jar (or emc-collaboration-services-remote.jar), which includes classes
related to the RichText object.
Configuring dfc.properties
When you call DFS locally with the Java productivity layer, DFS uses the DFC client that is bundled in
the DFS SDK. This DFC client is configured in a dfc.properties file that must be located on the project
classpath (you can start with the copy that is provided in the %DFS_SDK%/etc directory).
For remote execution of DFS services, you do not have to configure a local copy of dfc.properties.
DFS uses the DFC client that is bundled in the dfs.ear file that is deployed with Content Server,
or on a standalone application server. In these cases, the minimum dfc.properties settings for the
connection broker and global registry are set during the DFS installation. If you do not use the DFS
installation program you will need to configure dfc.properties in the EAR file—see the Documentum
Foundation Services Deployment Guide for details.
To configure dfc.properties:
1. At a minimum, provide values for the dfc.docbroker.host[0] and dfc.docbroker.port[0] for the
connection broker.
2. To run services that require an SBO, you will need to provide values for
dfc.globalregistry.username, dfc.globalregistry.repository, and dfc.globalregistry.password.
3. Add the folder where dfc.properties is located to your classpath. For example, if you are using
the %DFS_SDK%/etc/dfc.properties file, add %DFS_SDK%/etc to your classpath.
Note: For more information on SBOs, see the Documentum Foundation Classes Development Guide.
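For example, a minimal dfc.properties might look like the following sketch (all values are placeholders for your environment; dm_bof_registry is the conventional global registry user name, and the password must be the encrypted value, as noted elsewhere in this guide):

dfc.docbroker.host[0]=docbroker.example.com
dfc.docbroker.port[0]=1489
dfc.globalregistry.repository=myglobalregistry
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=encryptedpasswordvalue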
• contextRoot—the root address under which service modules are organized; the contextRoot for
DFS-provided services is “services”
• name—the name of the service module, under which a set of related services are organized
The fully qualified service address is constructed at runtime as follows:
<protocol>://<host>:<port>/<contextRoot>/<module>/<serviceName>
For example:
http://dfsHostName:8888/services/core/ObjectService
The defaultModuleName value is used when no module name is provided to the ServiceFactory
getRemoteService method. The registryProviderModuleName value is used to specify the location of
the ContextRegistryService where the service context will be registered if the module name is not
explicitly provided to the ContextFactory register method.
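For example, a minimal dfs-client.xml along the following lines (the host and port are placeholders for your deployment) would resolve unqualified service lookups to the core module:

<DfsClientConfig defaultModuleName="core" registryProviderModuleName="core">
   <ModuleInfo name="core"
               protocol="http"
               host="dfsHostName"
               port="8888"
               contextRoot="services">
   </ModuleInfo>
</DfsClientConfig>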
The following method (the name and signature are representative of the samples) creates a service
context containing a single repository identity:
public IServiceContext createServiceContext(String repositoryName, String userName, String password)
{
ContextFactory contextFactory = ContextFactory.getInstance();
IServiceContext context = contextFactory.newContext();
RepositoryIdentity repoId = new RepositoryIdentity();
repoId.setRepositoryName(repositoryName);
repoId.setUserName(userName);
repoId.setPassword(password);
context.addIdentity(repoId);
return context;
}
Identities
A service context contains a collection of identities, which are mappings of repository names onto
sets of user credentials used in service authentication. A service context is expected to contain only
one identity per repository name. Identities are set in a service context using one of the concrete
Identity subclasses:
• BasicIdentity directly extends the Identity parent class, and includes accessors for user name
and password, but not for repository name. This class can be used in cases where the service is
known to access only a single repository, or in cases where the user credentials in all repositories
are known to be identical. BasicIdentity can also be used to supply fallback credentials in the
case where the user has differing credentials on some repositories, for which RepositoryIdentity
instances will be set, and identical credentials on all other repositories. Because BasicIdentity does
not contain repository information, the username and password are authenticated against the
global registry. If there is no global registry defined, authentication fails.
• RepositoryIdentity extends BasicIdentity, and specifies a mapping of repository name to a set of
user credentials, which include a user name, password, and optionally a domain name if required
by your network environment. In a RepositoryIdentity, you can use the “*” wildcard (represented
by the constant RepositoryIdentity.DEFAULT_REPOSITORY_NAME) in place of the repository
name. In this case DFS will authorize the credentials against the global registry. If no global
registry is available, or if the credentials are not valid on the global registry, the authentication
fails. Using this wildcard in a RepositoryIdentity is essentially the same as using a BasicIdentity.
• SsoIdentity allows an SSO solution to be used to authenticate the user. You can use SsoIdentity
class when the service that you are requesting is accessing only one repository, or if the user
credentials in all repositories are identical. Because SsoIdentity does not contain repository
information, the username and password are authenticated against the designated global registry.
If there is no global registry defined, authentication fails.
• BinaryIdentity is used only in a local Java client and is not serializable. It is used to encapsulate
Kerberos credentials (see Kerberos authentication in a local DFS web application, page 206).
• PrincipalIdentity is used to indicate that DFC principal mode login should be used with the user
name provided in the identity instance. PrincipalIdentity is not XML serializable, so it will not
be sent over the wire. For security reasons, it will work only when the DFS service is invoked
in local mode.
Locale
The locale property of an IServiceContext object specifies the language and optionally country
setting to use for locale-sensitive features. The locale is used, for example, to control which
NLS-specific Data Dictionary strings will be provided by Content Server to the DFS layer. The
format of the locale string value is based on Java locale strings, which in turn are based on
ISO 639-1 two-character, lowercase language codes and ISO 3166 country codes. The format
of a Java locale string is <languagecode>[_<countrycode>]; for example, the Java locale string
for British English is "en_GB". Further information on Java locale strings can be found at
http://java.sun.com/developer/technicalArticles/J2SE/locale/.
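For example, a remote client serving British English users would set the locale on the context before invoking services (a one-line sketch; serviceContext is assumed to be your IServiceContext instance):

serviceContext.setLocale("en_GB");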
To find out the locale codes currently supported by Content Server at installation, refer to the
documentation for the locale_name property of the dm_server_config object in the EMC Documentum
System Object Reference Manual.
If the locale is not set in the service context, the DFS server runtime will use the value set in the DFS
server application. Typically this means that a DFS client (particularly a remote client) should set
the locale to the locale expected by the user, rather than relying on the value set on the server. The
locale setting used by the DFS server can be specified in the dfc.locale property of dfc.properties. If
the value is not set in the service context by the client and not set on the server, the DFS server will
use the locale of the JVM in which it is running.
Transaction support
DFS provides basic support for transactions. If transactions are enabled, operations will use implicit
transactional behavior; that is, they will begin the transaction at the start of the operation and commit
the transaction at the end of the operation if it completes successfully. If any part of the operation
fails, the entire operation will be rolled back.
To enable transactions, set the USER_TRANSACTION_HINT runtime property in the service context
to IServiceContext.TRANSACTION_REQUIRED.
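For example (a minimal sketch; the property name is passed here as a string, and the TRANSACTION_REQUIRED constant is the one documented above):

IServiceContext context = ContextFactory.getInstance().newContext();
// operations invoked with this context run in implicit transactions
context.setRuntimeProperty("USER_TRANSACTION_HINT",
        IServiceContext.TRANSACTION_REQUIRED);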
Combining USER_TRANSACTION_HINT and PAYLOAD_PROCESSING_POLICY
The expected behavior is that the payload policy must be honored first, then the transaction policy.
For example, suppose that we use the Object service to create objects based on a DataPackage that has
two DataObject trees. We use PAYLOAD_CONTINUE_ON_EXCEPTION with transaction support
to create the objects. At runtime, a leaf in the first DataObject tree fails and all others succeed. In
this case only the objects in the second DataObject tree would be created; the creation of the first
DataObject tree would be rolled back. If no transaction support were used, some leaves from the first
DataObject tree would be created, as well as the entire second DataObject tree.
OperationOptions
DFS services generally take an OperationOptions object as the final argument when calling a service
operation. OperationOptions contains profiles and properties that specify behaviors for the operation.
The properties have no overlap with properties set in the service context’s RuntimeProperties. The
profiles, however, can potentially overlap with profiles stored in the service context. In the case that
they do overlap, the profiles in OperationOptions always take precedence over profiles stored in the
service context. The profiles stored in the service context take effect when no matching profile is
stored in the OperationOptions for a specific operation. The override of profiles in the service context
takes place on a profile-by-profile basis: there is no merge of specific settings stored within the profiles.
As a recommended practice, a service client should avoid storing profiling information or properties
in the service operation that are likely to be modified by specific service instances. This avoids
possible side-effects caused by modifications to a service context shared by multiple services. It is
likely that ContentTransferProfile will not change and so should be included in the service context.
Other profiles are better passed within OperationOptions.
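For example, the following sketch passes a per-call PropertyProfile in OperationOptions rather than storing it in a shared service context (the objectService and objectIdentitySet variables are assumed to have been created earlier):

// return all properties for this call only, overriding any
// PropertyProfile stored in the service context
PropertyProfile propertyProfile = new PropertyProfile();
propertyProfile.setFilterMode(PropertyFilterMode.ALL);
OperationOptions operationOptions = new OperationOptions();
operationOptions.setPropertyProfile(propertyProfile);
DataPackage dataPackage = objectService.get(objectIdentitySet, operationOptions);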
OperationOptions are discussed in more detail under the documentation for specific service
operations. For more information on core profiles, see PropertyProfile, page 93, ContentProfile, page
97, PermissionProfile, page 101, and RelationshipProfile, page 110. Other profiles are covered under
specific services in the Enterprise Content Services Reference.
This chapter has two goals. The first is to show you a basic DFS consumer that invokes a service and
gets some results. This will let you know whether your environment is set up correctly, and show you
the basic steps required to code a DFS consumer.
The second goal is to show you how to set up and run the DFS documentation samples. You may
want to debug the samples in Visual Studio to see exactly how they work, and you may want to
modify the samples or add samples of your own to the sample project.
• Verifying prerequisites for the samples, page 57
• Setting up the .NET solution, page 59
• Examine and run the HelloDFS project, page 60
• Set up and run the documentation samples, page 63
For example, if your services are running on localhost at port 8080 and using the default http protocol
(rather than https), the address of the Object service would be
http://localhost:8080/services/core/ObjectService
DFS services are organized into several different service modules. The central module, named “core”,
contains the Object service and other fundamental Documentum platform services. Unless you
specifically change the shipped configuration settings, “core” will be the default module.
To test whether you have the correct endpoint address and whether the DFS server is running and
available, you can open a service WSDL in a browser. For example, if you want to test a DFS instance
deployed with Content Server on a host named MyContentServer, you could open the QueryService
WSDL in your browser using the following URL:
http://MyContentServer:9080/services/core/QueryService?wsdl
If the WSDL does not come up in your browser, make sure that the Java Method Server is running
on the Content Server host.
then DFS will have been configured to point to the connection broker running on that Content Server
installation. If the EAR or WAR file was manually deployed to an application server, the connection
broker host and port should have been set manually as part of the deployment procedure. For more
details, see the Documentum Foundation Services Deployment Guide.
Examine QueryServiceTest.cs
In Visual Studio, open and examine the source file QueryServiceTest.cs, which is the sole class
in the HelloDFS project. The QueryServiceTest class creates a service context, invokes the DFS
QueryService, and prints the results of the query to the console.
The private method getSimpleContext returns a ServiceContext, which the DFS .NET client runtime
uses to encapsulate and process data that is passed in the SOAP header to the remote service. At
a minimum, the ServiceContext needs to contain an identity consisting of a repository name, user
name, and user password, which will enable the DFS server-side runtime to connect to and
authenticate on a repository.
/*
* This routine returns a service context
* which includes the repository name and user credentials
*/
private IServiceContext getSimpleContext()
{
/*
* Get the service context and set the user
* credentials and repository information
*/
ContextFactory contextFactory = ContextFactory.Instance;
IServiceContext serviceContext = contextFactory.NewContext();
RepositoryIdentity repositoryIdentity =
new RepositoryIdentity(repository, userName, password, "");
serviceContext.AddIdentity(repositoryIdentity);
return serviceContext;
}
When the QueryService is invoked, the DFS client-side runtime will serialize data from the local
ServiceContext object and pass it over the wire as a SOAP header like the one shown here:
<s:Header>
<ServiceContext token="temporary/127.0.0.1-1205168560578-476512254"
xmlns="http://context.core.datamodel.fs.documentum.emc.com/">
<Identities xsi:type="RepositoryIdentity"
userName="MyUserName"
password="MyPassword"
repositoryName="MyRepositoryName"
domain=""
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
<RuntimeProperties/>
</ServiceContext>
</s:Header>
The CallQueryService method does most of the work in the sample class, including the essential piece
of invoking the remote service through the service proxy object.
It begins by instantiating an IQueryService object representing the remote service (the service proxy).
This object encapsulates the service context described in the preceding section.
/*
* Get an instance of the QueryService by passing
* in the service context to the service factory.
*/
ServiceFactory serviceFactory = ServiceFactory.Instance;
IServiceContext serviceContext = getSimpleContext();
IQueryService querySvc
= serviceFactory.GetRemoteService<IQueryService>(serviceContext,
moduleName, address);
Next, CallQueryService constructs two objects that will be passed to the Execute method: a
PassthroughQuery object that encapsulates a DQL statement string, and a QueryExecution object,
which contains service option settings. Both objects will be serialized and passed to the remote
service in the SOAP body.
/*
* Construct the query and the QueryExecution options
*/
PassthroughQuery query = new PassthroughQuery();
query.QueryString = "select r_object_id, object_name from dm_cabinet";
query.AddRepository(repository);
QueryExecution queryEx = new QueryExecution();
queryEx.CacheStrategyType = CacheStrategyType.DEFAULT_CACHE_STRATEGY;
CallQueryService then calls the Execute method of the service proxy, which causes the runtime
to serialize the data passed to the proxy Execute method, invoke the remote service, and receive
a response via HTTP.
/*
* Execute the query passing in operation options
* This sends the SOAP message across the wire
* Receives the SOAP response and wraps the response in the
* QueryResult object
*/
OperationOptions operationOptions = null;
QueryResult queryResult = querySvc.Execute(query, queryEx, operationOptions);
The complete SOAP message passed to the service endpoint is shown here:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Header>
<ServiceContext token="temporary/127.0.0.1-1205239338115-25203285"
xmlns="http://context.core.datamodel.fs.documentum.emc.com/">
<Identities xsi:type="RepositoryIdentity"
userName="MyUserName"
password="MyPassword"
repositoryName="MyRepositoryName"
domain=""
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
<RuntimeProperties/>
</ServiceContext>
</s:Header>
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<execute xmlns="http://core.services.fs.documentum.emc.com/">
<query xsi:type="q1:PassthroughQuery"
queryString="select r_object_id, object_name from dm_cabinet"
xmlns=""
xmlns:q1="http://query.core.datamodel.fs.documentum.emc.com/">
<q1:repositories>techpubs</q1:repositories>
</query>
<execution startingIndex="0"
maxResultCount="100"
maxResultPerSource="50"
cacheStrategyType="DEFAULT_CACHE_STRATEGY"
xmlns=""/>
</execute>
</s:Body>
</s:Envelope>
The remainder of the CallQueryService method examines the QueryResult and prints information
about its contents to the console.
/***********************************************************/
To specify the service endpoint address, replace HostName with the IP address or host name of
the machine where DFS is deployed, and replace PortNumber with the port number where the
DFS application is deployed. For example, if DFS is deployed with Content Server on a machine
called MyContentServer, the value of address will be
http://MyContentServer:9080/services
If DFS was deployed on its own tier, the port number will depend on the deployment environment,
typically port 8080:
For more information see Verify repository and login information, page 35.
If you do this, you should delete the created sample data yourself after running each sample to avoid
errors related to duplicate object names when running successive samples.
If more than one client is going to test the samples against the same repository, you should create a
unique name for the test cabinet that gets created on the repository by the sample runner. Do this by
changing the testCabinetPath constant (for example, by replacing XX with your initials).
// if multiple developers are testing the samples on the same repository,
// create a unique name testCabinetPath to avoid conflicts
public const String testCabinetPath = "/DFSTestCabinetXX";
3. If you are going to run multiple samples, open SampleContentManager.cs and make sure that
isDataCleanedUp is initialized to true. This will prevent duplicate filename errors.
4. Display the Output window.
5. Build and run the DotNetSampleRunner project in debug mode.
The .NET productivity layer is functionally identical to the Java productivity layer, except that the
.NET productivity layer supports only remote service invocation. Note that while the DFS samples
are in C#, the .NET library is CLS compliant and can be used by any CLS-supported language.
This chapter covers the following topics:
• Configuring .NET consumer project dependencies, page 67
• Configuring a .NET client, page 68
• Creating a service context in .NET, page 73
• Instantiating a service in .NET, page 77
• Transaction support, page 78
• OperationOptions, page 79
• Handling SOAP faults in the .NET productivity layer, page 79
• Emc.Documentum.FS.DataModel.Collaboration
• Emc.Documentum.FS.Services.Bpm
• Emc.Documentum.FS.Services.CI
• Emc.Documentum.FS.Services.Collaboration
• Emc.Documentum.FS.Services.Core
• Emc.Documentum.FS.Services.Search
<bindings>
<basicHttpBinding>
<binding name="DfsAgentService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
<binding name="DfsContextRegistryService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
<binding name="DfsDefaultService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
</basicHttpBinding>
</bindings>
</system.serviceModel>
</configuration>
The configuration file contains settings that are DFS-specific, as well as settings that are WCF-specific,
but which impact DFS behavior. The DFS-specific settings are those within the <Emc.Documentum>
<FS> tags. The remaining settings (within <basicHttpBinding>) are specific to Microsoft WCF.
For documentation of the Microsoft settings refer to http://msdn.microsoft.com/en-us/library/
ms731361.aspx.
The ConfigObject section includes the following DFS-specific attributes:
• defaultModuleName—the module name to use if no module is specified in the service
instantiation method
• registryProviderModuleName—the module that includes the ContextRegistryService (under
normal circumstances leave this set to “core”)
• requireSignedUcfJars—sets whether the .NET runtime requires that UCF-related JAR files
downloaded from the DFS server be signed; default is “true”. Normally this setting does not need
to be changed, but it must be set to false if the client runtime is version 6.5 or higher and the
service runtime is version 6 (which does not have signed UCF JARs).
The ModuleInfo elements have properties that together describe the address of a module (and of the
services at that address), using the following attributes:
• protocol—either http or https, depending on whether the application server is configured to
use SSL.
• host—the DNS name or IP address of the service host.
• port—the port number at which the DFS application server listens. When DFS is installed with
Content Server, the port defaults to 9080.
• contextRoot—the root address under which service modules are organized; the contextRoot for
DFS-provided services is “services”
• name—the name of the service module, under which a set of related services are organized
For example:
http://dfsHostName:8080/services/core/ObjectService
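For instance, a ModuleInfo element for the core module might be declared as in the following
sketch (the host and port values are placeholders for your deployment):
<ModuleInfo name="core"
    protocol="http"
    host="dfsHostName"
    port="8080"
    contextRoot="services"/>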
The maximum message and buffer sizes are controlled by attributes of the binding elements, as in
the following excerpt from the configuration file:
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName" algorithmSuite=
"Default" />
</security>
</binding>
If you are concerned about preventing users from declaring too small a value for such attributes,
programmatically check and override the declared values, as follows:
// look up the private configuration fields on ContextFactory and AppConfig
System.Reflection.FieldInfo appConfigInfo = typeof(ContextFactory)
    .GetField("appConfig",
        System.Reflection.BindingFlags.Instance |
        System.Reflection.BindingFlags.NonPublic);
System.Reflection.FieldInfo agentServiceBindingInfo = typeof(AppConfig)
    .GetField("m_agentServiceBinding",
        System.Reflection.BindingFlags.Instance |
        System.Reflection.BindingFlags.NonPublic);
System.Reflection.FieldInfo contextRegistryServiceBindingInfo = typeof(AppConfig)
    .GetField("m_contextRegistryServiceBinding",
        System.Reflection.BindingFlags.Instance |
        System.Reflection.BindingFlags.NonPublic);
System.Reflection.FieldInfo defaultServiceBindingInfo = typeof(AppConfig)
    .GetField("m_defaultServiceBinding",
        System.Reflection.BindingFlags.Instance |
        System.Reflection.BindingFlags.NonPublic);

// create a replacement binding that allows the maximum message and buffer sizes
BasicHttpBinding binding = new BasicHttpBinding();
binding.MaxReceivedMessageSize = 0x7fffffffL;
binding.MaxBufferSize = 0x7fffffff; // MaxBufferSize is an int property

// overwrite the three bindings held by the AppConfig instance
// (contextFactory is an existing ContextFactory instance)
agentServiceBindingInfo.SetValue(appConfigInfo.GetValue(contextFactory), binding);
contextRegistryServiceBindingInfo.SetValue(appConfigInfo.GetValue(contextFactory), binding);
defaultServiceBindingInfo.SetValue(appConfigInfo.GetValue(contextFactory), binding);
ContentTransferProfile contentTransferProfile =
new ContentTransferProfile();
contentTransferProfile.TransferMode = ContentTransferMode.MTOM;
serviceContext.SetProfile(contentTransferProfile);
}
Identities
A service context contains a collection of identities, which are mappings of repository names onto
sets of user credentials used in service authentication. A service context is expected to contain only
one identity per repository name. Identities are set in a service context using one of the concrete
Identity subclasses (a usage sketch follows this list):
• BasicIdentity directly extends the Identity parent class, and includes accessors for user name
and password, but not for repository name. This class can be used in cases where the service is
known to access only a single repository, or in cases where the user credentials in all repositories
are known to be identical. BasicIdentity can also be used to supply fallback credentials in the
case where the user has differing credentials on some repositories, for which RepositoryIdentity
instances will be set, and identical credentials on all other repositories. Because BasicIdentity does
not contain repository information, the username and password are authenticated against the
global registry. If there is no global registry defined, authentication fails.
• RepositoryIdentity extends BasicIdentity, and specifies a mapping of repository name to a set of
user credentials, which include a user name, password, and optionally a domain name if required
by your network environment. In a RepositoryIdentity, you can use the “*” wildcard (represented
by the constant RepositoryIdentity.DEFAULT_REPOSITORY_NAME) in place of the repository
name. In this case DFS will authorize the credentials against the global registry. If no global
registry is available, or if the credentials are not valid on the global registry, the authentication
fails. Using this wildcard in a RepositoryIdentity is essentially the same as using a BasicIdentity.
• SsoIdentity allows an SSO solution to be used to authenticate the user. You can use the SsoIdentity
class when the service that you are requesting accesses only one repository, or if the user
credentials in all repositories are identical. Because SsoIdentity does not contain repository
information, the username and password are authenticated against the designated global registry.
If there is no global registry defined, authentication fails.
• BinaryIdentity is used only in a local Java client and is not serializable. It is used to encapsulate
Kerberos credentials (see Kerberos authentication in a local DFS web application, page 206).
• PrincipalIdentity is used to indicate that DFC principal mode login should be used with the user
name provided in the identity instance. PrincipalIdentity is not XML serializable, so it will not
be sent over the wire. For security reasons, it will work only when the DFS service is invoked
in local mode.
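For example, the following Java sketch (the repository name and credentials are placeholders)
adds a RepositoryIdentity to a newly created service context:
IServiceContext serviceContext = ContextFactory.getInstance().newContext();
RepositoryIdentity identity = new RepositoryIdentity();
identity.setRepositoryName("MyRepository");
identity.setUserName("MyUserName");
identity.setPassword("MyPassword");
serviceContext.addIdentity(identity);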
Locale
The locale property of an IServiceContext object specifies the language and optionally country
setting to use for locale-sensitive features. The locale is used, for example, to control which
NLS-specific Data Dictionary strings will be provided by Content Server to the DFS layer. The
format of the locale string value is based on Java locale strings, which in turn are based on
ISO 639-1 two-character, lowercase language codes and ISO 3166 country codes. The format
of a Java locale string is <languagecode>[_<countrycode>]; for example, the Java locale string
for British English is "en_GB". Further information on Java locale strings can be found at
http://java.sun.com/developer/technicalArticles/J2SE/locale/.
To find out the locale codes currently supported by Content Server at installation, refer to the
documentation for the locale_name property of the dm_server_config object in the EMC Documentum
System Object Reference Manual.
If the locale is not set in the service context, the DFS server runtime will use the value set in the DFS
server application. Typically this means that a DFS client (particularly a remote client) should set
the locale to the locale expected by the user, rather than relying on the value set on the server. The
locale setting used by the DFS server can be specified in the dfc.locale property of dfc.properties. If
the value is not set in the service context by the client and not set on the server, the DFS server will
use the locale of the JVM in which it is running.
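For example, a remote client could set British English explicitly on an existing service context
(a minimal sketch):
// use British English for locale-sensitive features such as Data Dictionary strings
serviceContext.setLocale("en_GB");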
Transaction support
DFS provides basic support for transactions. If transactions are enabled, operations will use implicit
transactional behavior; that is, they will begin the transaction at the start of the operation and commit
the transaction at the end of the operation if it completes successfully. If any part of the operation
fails, the entire operation will be rolled back.
To enable transactions, set the USER_TRANSACTION_HINT runtime property in the service context
to IServiceContext.TRANSACTION_REQUIRED.
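For example, in the Java productivity layer (a minimal sketch, assuming an existing serviceContext):
// all subsequent operations in this context use implicit transactions
serviceContext.setRuntimeProperty(IServiceContext.USER_TRANSACTION_HINT,
    IServiceContext.TRANSACTION_REQUIRED);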
The expected behavior is that the payload policy must be honored first, then the transaction policy.
For example, suppose that we use the Object service to create objects based on a DataPackage that has
two DataObject trees. We use PAYLOAD_CONTINUE_ON_EXCEPTION with transaction support
to create the objects. At runtime, a leaf in the first DataObject tree fails and all others succeed. In
this case only the objects in the second DataObject tree would be created; the creation of the first
DataObject tree would be rolled back. If no transaction support were used, some leaves from the first
DataObject tree would be created, as well as the entire second DataObject tree.
OperationOptions
DFS services generally take an OperationOptions object as the final argument when calling a service
operation. OperationOptions contains profiles and properties that specify behaviors for the operation.
The properties have no overlap with properties set in the service context’s RuntimeProperties. The
profiles can potentially overlap with properties stored in the service context. In the case that they do
overlap, the profiles in OperationOptions always take precedence over profiles stored in the service
context. The profiles stored in the service context take effect when no matching profile is stored in the
OperationOptions for a specific operation. The override of profiles in the service context takes place
on a profile-by-profile basis: there is no merge of specific settings stored within the profiles.
As a recommended practice, a service client should avoid storing profiling information or properties
in the service context that are likely to be modified by specific service operations. This avoids
possible side-effects caused by modifications to a service context shared by multiple services. It is
likely that ContentTransferProfile will not change and so should be included in the service context.
Other profiles are better passed within OperationOptions.
OperationOptions are discussed in more detail under the documentation for specific service
operations. For more information on core profiles, see PropertyProfile, page 93, ContentProfile, page
97, PermissionProfile, page 101, and RelationshipProfile, page 110. Other profiles are covered under
specific services in the Enterprise Content Services Reference.
{
Console.WriteLine(exx.StackTrace);
Assert.Fail("Unexpected Exception: " + exx.GetType().ToString());
}
}
To obtain detailed information about the exception stack, examine the Detail.Nodes property
of the exception, which is an array of XmlNode objects. The following shows a node from the
SerializableException instance caught in the preceding example:
<exceptionBean>
<attribute>
<name>messageId</name>
<type>java.lang.String</type>
<value>E_CREATING_OBJECT_TREE_FAILED_WITH_TREE_DUMP</value>
</attribute>
<exceptionClass>com.emc.documentum.fs.services.core.CoreServiceException
</exceptionClass>
<genericType>java.lang.Exception</genericType>
<message>Create operation failed for object:[id =null] PROPERTIES</message>
<messageId>E_CREATING_OBJECT_TREE_FAILED_WITH_TREE_DUMP</messageId>
</exceptionBean>
The DFS data model comprises the object model for data passed to and returned by Enterprise
Content Services. This chapter covers fundamental aspects of the data model and important
concepts related to it. This chapter is a supplement to the API documentation, which provides more
comprehensive coverage of DFS classes. This chapter covers the following topics:
• DataPackage, page 81
• DataObject, page 82
• ObjectIdentity, page 84
• Property, page 87
• Content model and profiles, page 95
• Permissions, page 100
• Relationship, page 102
• Other classes related to DataObject, page 117
DataPackage
The DataPackage class defines the fundamental unit of information that contains data passed to and
returned by services operating in the DFS framework. A DataPackage is a collection of DataObject
instances, which is typically passed to, and returned by, Object service operations such as create,
get, and update. Object service operations process all the DataObject instances in the DataPackage
sequentially.
DataPackage example
The following sample instantiates, populates, and iterates through a data package.
//build list and then set the DataPackage contents to the list
// (dataObject and dataObject1 are assumed to be constructed earlier)
ArrayList<DataObject> dataObjectList = new ArrayList<DataObject>();
dataObjectList.add(dataObject);
dataObjectList.add(dataObject1);
DataPackage dataPackage = new DataPackage();
dataPackage.setDataObjects(dataObjectList);
// iterate through the data package
for (DataObject d : dataPackage.getDataObjects())
    System.out.println("Identity: " + d.getIdentity());
DataObject
A DataObject is a representation of an object in an ECM repository. In the context of EMC
Documentum technology, the DataObject functions as a DFS representation of a persistent repository
object, such as a dm_sysobject or dm_user. Enterprise Content Services (such as the Object service)
consistently process DataObject instances as representations of persistent repository objects.
A DataObject instance is potentially large and complex, and much of the work in DFS service
consumers will be dedicated to constructing the DataObject instances. A DataObject can potentially
contain comprehensive information about the repository object that it represents, including its
identity, properties, content, and its relationships to other repository objects. In addition, the
DataObject instance may contain settings that instruct the services about how the client wishes parts of
the DataObject to be processed. The complexity of the DataObject and of related parts of the data
model, such as the Profile classes, is a design feature that enables and encourages simplicity of the
service interface and the packaging of complex consumer requests into a minimal number of service
interactions.
For the same reason DataObject instances are consistently passed to and returned by services in
simple collections defined by the DataPackage class, permitting processing of multiple DataObject
instances in a single service interaction.
Class Description
ObjectIdentity An ObjectIdentity uniquely identifies the repository object referenced by
the DataObject. A DataObject can have 0 or 1 identities. For more details
see ObjectIdentity, page 84.
PropertySet A PropertySet is a collection of named properties, which correspond to the
properties of a repository object represented by the DataObject. A DataObject
can have 0 or 1 PropertySet instances. For more information see Property,
page 87.
Content Content objects contain data about file content associated with the data object.
A DataObject can contain 0 or more Content instances. A DataObject without
content is referred to as a "contentless DataObject." For more information see
Content model and profiles, page 95.
Permission A Permission object specifies a specific basic or extended permission, or a
custom permission. A DataObject can contain 0 or more Permission objects.
For more information see Permissions, page 100.
Relationship A Relationship object defines a relationship between the repository object
represented by the DataObject and another repository object. A DataObject
can contain 0 or more Relationship instances. For more information, see
Relationship, page 102.
Aspect The Aspect class models an aspect that can be attached to, or detached from, a
persistent repository object.
DataObject type
A DataObject instance in normal DFS usage corresponds to a typed object defined in the repository.
The type is specified in the type setting of the DataObject using the type name defined in the
repository (for example dm_sysobject or dm_user). If the type is not specified, services will use an
implied type, which is dm_document.
DataObject construction
The construction of DataObject instances will be a constant theme in examples of service usage
throughout this document. The following typical example instantiates a DataObject, sets some of its
properties, and assigns it some content. Note that because this is a new DataObject, only a repository
name is specified in its ObjectIdentity.
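A minimal sketch of such a construction (the property values and content file path are
placeholders, and the FileContent constructor arguments shown are assumptions; FileContent is the
productivity-layer convenience class described later in this chapter):
ObjectIdentity objIdentity = new ObjectIdentity(defaultRepositoryName);
DataObject dataObject = new DataObject(objIdentity, "dm_document");
PropertySet properties = dataObject.getProperties();
properties.set("object_name", "MyImage");
properties.set("title", "MyImage");
properties.set("a_content_type", "gif");
dataObject.getContents().add(new FileContent("c:/temp/MyImage.gif", "gif"));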
ObjectIdentity
The function of the ObjectIdentity class is to uniquely identify a repository object. An ObjectIdentity
instance contains a repository name and an identifier that can take various forms, described in the
following table listing the ValueType enum constants.
ValueType Description
OBJECT_ID Identifier value is of type ObjectId, which is a container for the value of a
repository r_object_id attribute, a value generated by Content Server to
uniquely identify a specific version of a repository object.
OBJECT_PATH Identifier value is of type ObjectPath, which contains a String expression
specifying the path to the object, excluding the repository name. For
example /MyCabinet/MyFolder/MyDocument.
QUALIFICATION Identifier value is of type Qualification, which can take the form of a DQL
expression fragment. The Qualification is intended to uniquely identify a
Content Server object.
OBJECT_KEY Identifier value is of type ObjectKey, which contains a PropertySet,
the properties of which, joined by logical AND, uniquely identity the
repository object.
When constructing a DataObject to pass to the create operation, or in any case when the DataObject
represents a repository object that does not yet exist, the ObjectIdentity need only be populated
with a repository name. If the ObjectIdentity does contain a unique identifier, it must represent
an existing repository object.
Note that the ObjectIdentity class is generic in the Java client library, but non-generic in the .NET
client library.
ObjectId
An ObjectId is a container for the value of a repository r_object_id attribute, which is a value
generated by Content Server to uniquely identify a specific version of a repository object. An ObjectId
can therefore represent either a CURRENT or a non-CURRENT version of a repository object. DFS
services exhibit service- and operation-specific behaviors for handling non-CURRENT versions,
which are documented under individual services and operations.
ObjectPath
An ObjectPath contains a String expression specifying the path to a repository object, excluding
the repository name. For example /MyCabinet/MyFolder/MyDocument. An ObjectPath can only
represent the CURRENT version of a repository object. Using an ObjectPath does not guarantee the
uniqueness of the repository object, because Content Server does permit objects with identical names
to reside within the same folder. If the specified path is unique at request time, the path is recognized
as a valid object identity; otherwise, the DFS runtime will throw an exception.
Qualification
A Qualification is an object that specifies criteria for selecting a set of repository objects. Qualifications
used in ObjectIdentity instances are intended to specify a single repository object. The criteria set in
the qualification is expressed as a fragment of a DQL SELECT statement, consisting of the expression
string following "SELECT FROM", as shown in the following example.
Qualification qualification =
new Qualification("dm_document where object_name = 'dfs_sample_image'");
DFS services use normal DQL statement processing, which selects the CURRENT version of an
object if the ALL keyword is not used in the DQL WHERE clause. The preceding example (which
assumes for simplicity that the object_name is sufficient to ensure uniqueness) will select only the
CURRENT version of the object named dfs_sample_image. To select a specific non-CURRENT
version, the Qualification must use the ALL keyword, as well as specific criteria for identifying
the version, such as a symbolic version label:
String nonCurrentQual = "dm_document (ALL) " +
"where object_name = 'dfs_sample_image' " +
"and ANY r_version_label = 'test_version'";
Qualification<String> qual = new Qualification<String>(nonCurrentQual);
// repository only is required to represent an object that has not been created
objectIdentities[0] = new ObjectIdentity(repName);
Qualification qualification
= new Qualification("dm_document where r_object_id = '090007d280075180'");
objectIdentities[2] = new ObjectIdentity<Qualification>(qualification, repName);
// repository only is required to represent an object that has not been created
objectIdentities[0] = new ObjectIdentity(repName);
ObjectIdentitySet
An ObjectIdentitySet is a collection of ObjectIdentity instances, which can be passed to an Object
service operation so that it can process multiple repository objects in a single service interaction. An
ObjectIdentitySet is analogous to a DataPackage, but is passed to service operations such as move,
copy, and delete that operate only against existing repository data, and which therefore do not require
any data from the consumer about the repository objects other than their identity.
ObjectIdentitySet example
The following fragments (Java, then C#) add an identity based on a Qualification to an
ObjectIdentitySet:
ObjectIdentitySet objIdSet = new ObjectIdentitySet();
Qualification qualification =
    new Qualification("dm_document where object_name = 'bl_upwind.gif'");
objIdSet.addIdentity(new ObjectIdentity(qualification, repName));

ObjectIdentitySet objIdSet = new ObjectIdentitySet();
Qualification qualification =
    new Qualification("dm_document where object_name = 'bl_upwind.gif'");
objIdSet.AddIdentity(new ObjectIdentity(qualification, repName));
Property
A DataObject optionally contains a PropertySet, which is a container for a set of Property objects.
Each Property in normal usage corresponds to a property (also called attribute) of a repository object
represented by the DataObject. A Property object can represent a single property, or an array of
properties of the same data type. Property arrays are represented by subclasses of ArrayProperty,
and correspond to repeating attributes of repository objects.
Property model
The Property class is subclassed by data type (for example StringProperty), and each subtype has a
corresponding class containing an array of the same data type, extending the intermediate abstract
class ArrayProperty (see Figure 4, page 88).
// an array of ArrayProperty instances of various types
Property[] properties =
{
    new StringArrayProperty("keywords",
        new String[]{"lions", "tigers", "bears"}),
    new NumberArrayProperty("my_number_array", (short) 1, 10, 100L, 10.10),
    new BooleanArrayProperty("my_boolean_array", true, false, true, false),
    new DateArrayProperty("my_date_array", new Date(), new Date()),
    new ObjectIdArrayProperty("my_obj_id_array",
        new ObjectId("0c0007d280000107"), new ObjectId("090007d280075180")),
};
Transient properties
Transient properties are custom Property objects that are not interpreted by the services as
representations of persistent properties of repository objects. You can therefore use transient
properties to pass your own data to a service to be used for a purpose other than setting attributes
on repository objects.
Note: Currently transient properties are implemented only by the Object Service validate operation.
To indicate that a Property is transient, set the isTransient property of the Property object to true.
One intended application of transient properties implemented by the services is to provide the client
the ability to uniquely identify DataObject instances passed in a validate operation, when the instances
have not been assigned a unique ObjectIdentity. The validate operation returns a ValidationInfoSet
property, which contains information about any DataObject instances that failed validation. If the
service client has populated a transient property of each DataObject with a unique identifier, the client
will be able to determine which DataObject failed validation by examining the ValidationInfoSet.
The following sample would catch a ValidationException and print a custom id property for each
failed DataObject to the console.
while (items.hasNext())
{
    Property property = (Property) items.next();
    System.out.println(property.getClass().getName() +
        " = " + property.getValueAsString());
}
The NumberProperty class stores its value as a java.lang.Number, which will be instantiated as a
concrete numeric type such as Short or Long. Setting this value unambiguously, as demonstrated
in the preceding sample code (for example 10L or (short)10), determines how the value will be
serialized in the XML instance and received by a service. The following schema shows the numeric
types that can be serialized as a NumberProperty:
<xs:complexType name="NumberProperty">
<xs:complexContent>
<xs:extension base="xscp:Property">
<xs:sequence>
<xs:choice minOccurs="0">
<xs:element name="Short" type="xs:short"/>
<xs:element name="Integer" type="xs:int"/>
<xs:element name="Long" type="xs:long"/>
<xs:element name="Double" type="xs:double"/>
</xs:choice>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
ArrayProperty
The subclasses of ArrayProperty each contain an array of Property objects of a specific subclass
corresponding to a data type. For example, the NumberArrayProperty class contains an array of
NumberProperty. The array corresponds to a repeating attribute (also known as repeating property)
of a repository object.
ValueAction
The following table describes how the ValueActionType values are interpreted by an update
operation.
When using a ValueAction to delete a repeating attribute value, the value stored at position
ArrayProperty[p], corresponding to ValueAction[p], is not relevant to the operation. However, the two
arrays must still line up. In this case, you should store an empty (dummy) value in ArrayProperty[p]
(such as the empty string ""), rather than null.
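For example, deleting the value at index 0 of a repeating attribute might look like the following
sketch (the ValueAction constructor and the setValueActions accessor are assumptions here; check
the API documentation for the exact signatures):
// the dummy empty string keeps the two arrays aligned; the DELETE action ignores it
StringArrayProperty keywords = new StringArrayProperty("keywords", new String[]{""});
keywords.setValueActions(new ValueAction(ValueActionType.DELETE, 0));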
PropertySet
A PropertySet is a container for named Property objects, which typically (but do not necessarily)
correspond to persistent repository object properties.
You can restrict the size of a PropertySet returned by a service using the filtering mechanism of the
PropertyProfile class (see PropertyProfile, page 93).
PropertySet example
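A minimal sketch (the property names and values are placeholders):
PropertySet propertySet = new PropertySet();
propertySet.set("object_name", "MyDocument");
propertySet.set("title", "A sample document");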
PropertyProfile
A PropertyProfile defines property filters that limit the properties returned with an object by a
service. This allows you to optimize the service by returning only those properties that your service
consumer requires. PropertyProfile, like other profiles, is generally set in the OperationOptions
passed to a service operation (or it can be set in the service context).
You specify how PropertyProfile filters returned properties by setting its PropertyFilterMode. The
following table describes the PropertyProfile filter settings:
PropertyFilterMode Description
NONE No properties are returned in the PropertySet. Other settings
are ignored.
SPECIFIED_BY_INCLUDE No properties are returned unless specified in the
includeProperties list.
SPECIFIED_BY_EXCLUDE All properties are returned unless specified in the
excludeProperties list.
ALL_NON_SYSTEM Returns all properties except system properties.
ALL All properties are returned.
If the PropertyFilterMode is SPECIFIED_BY_INCLUDE, you can use the processIncludedUnknown
property of the PropertyFilter to control whether to process any property in the includeProperties
list that is not a property of the repository type. If processIncludedUnknown is false, DFS ignores any
such property specified in the includeProperties list. The default value of processIncludedUnknown
is false.
Updates to system properties during an update or checkin can produce unexpected results and
should be avoided unless you explicitly intend to change a system property. The update and checkin
operations (and other operations as well) will attempt to update any properties that are populated in
a DataObject provided by the operation. These properties can only be modified by a superuser, so the
attempt will generally result in a permissions error. If the user making the update is a superuser,
unintended changes to system properties may cause side effects.
When you initially populate the properties of the DataObject (for example, using the result of an
Object service get or create operation), avoid setting the PropertyFilterMode to ALL, if you plan
to pass the result into a checkin or update operation. Instead, you can set the property filter to
ALL_NON_SYSTEM. (The default is operation-specific, but this is generally the default setting
for Object service get and similar operations.)
If you do need to modify a system property, you should strip other system properties from the
DataObject prior to the update.
PropertyProfile example
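A minimal sketch that returns only two named properties (the setIncludeProperties accessor is an
assumption; check the API documentation):
PropertyProfile propertyProfile = new PropertyProfile();
propertyProfile.setFilterMode(PropertyFilterMode.SPECIFIED_BY_INCLUDE);
ArrayList<String> includeProperties = new ArrayList<String>();
includeProperties.add("object_name");
includeProperties.add("title");
propertyProfile.setIncludeProperties(includeProperties);
OperationOptions operationOptions = new OperationOptions();
operationOptions.setPropertyProfile(propertyProfile);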
Content model
The DFS content model provides a content type corresponding to each supported method of content
transfer. The following diagram shows the model as defined in the DFS WSDLs.
A DataObject contains a Contents collection, in which each Content instance can represent the
DataObject’s primary content, a rendition created by a user (RenditionType.CLIENT), or a rendition
created by Content Server (RenditionType.SERVER). A repository object can have only one primary
content object and zero or more renditions.
The BinaryContent type includes a Base64-encoded byte array and is typically used with the Base64
content transfer mode:
<xs:complexType name="BinaryContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence>
<xs:element name="Value" type="xs:base64Binary"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
In the DFS content model, MTOM data is typically represented by the DataHandlerContent type,
which is defined as follows:
<xs:complexType name="DataHandlerContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence>
<xs:element name="Value"
ns1:expectedContentTypes="*/*"
type="xs:base64Binary"
xmlns:ns1="http://www.w3.org/2005/05/xmlmime"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
The UrlContent type includes a string representing the location of a content resource. This URL is
used to download content from a repository through an Accelerated Content Services (ACS) server or
associated BOCS cache. ACS URLs are set to expire after a period that is configurable on the ACS
server (the default setting is 6 hours), so they are not suitable for long-term storage and reuse.
<xs:complexType name="UrlContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence/>
<xs:attribute name="url" type="xs:string" use="required"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
The DFS client productivity layer includes an additional class, FileContent, which is used as a
convenience class for managing content files. FileContent is also the primary type returned to the
productivity layer by services invoked in local mode.
ContentProfile
The ContentProfile class enables a client to set filters that control the content returned by a service.
This has important ramifications for service performance, because it permits fine control over
expensive content transfer operations.
ContentProfile includes three types of filters: FormatFilter, PageFilter, and PageModifierFilter. For
each of these filters there is a corresponding variable that is used or ignored depending on the filter
settings. For example, if the FormatFilter value is FormatFilter.SPECIFIED, the service will return
content that has a format specified by the ContentProfile.format property. Each property corresponds
to a setting in the dmr_content object that represents the content in the repository.
The following table describes the ContentProfile filter settings:
postTransferAction
You can set the postTransferAction property of a ContentProfile instance to open a document
downloaded by UCF for viewing or editing.
• To open the document for editing, ensure that the document is checked out before the UCF
content transfer.
• If the document has not been checked out from the repository, you can open the document for
viewing (read-only).
contentReturnType
ContentProfile example
The following sample sets a ContentProfile in operationOptions. The ContentProfile will instruct
the service to exclude all content from each returned DataObject.
ContentProfile contentProfile = new ContentProfile();
contentProfile.setFormatFilter(FormatFilter.NONE);
OperationOptions operationOptions = new OperationOptions();
operationOptions.setContentProfile(contentProfile);
ContentTransferProfile
Settings in the ContentTransferProfile class determine the mode of content transfer, and also specify
behaviors related to content transfer in a distributed environment. Distributed content transfer can
take place when DFS delegates the content transfer to UCF, or when content is downloaded from an
ACS server or BOCS cache using a UrlContent object.
Permissions
A DataObject contains a list of Permission objects, which together represent the permissions of the
user who has logged into the repository on the repository object represented by the DataObject. The
intent of the Permission list is to provide the client with read access to the current user’s permissions
on a repository object. The client cannot set or update permissions on a repository object by
modifying the Permission list and updating the DataObject. To actually change the permissions, the
client would need to modify or replace the repository object’s permission set (also called an Access
Control List, or ACL).
Each Permission has a permissionType property that can be set to BASIC, EXTENDED, or CUSTOM.
BASIC permissions are compound (sometimes called hierarchical), meaning that there are levels of
permission, with each level including all lower-level permissions. For example, if a user has RELATE
permissions on an object, the user is also granted READ and BROWSE permissions. This principle
does not apply to extended permissions, which have to be granted individually.
The following table shows the PermissionType enum constants and Permission constants:
Note: The granted property of a Permission is reserved for future use to designate whether
a Permission is explicitly not granted, that is to say, whether it is explicitly denied. In EMC
Documentum 6, only granted permissions are returned by services.
PermissionProfile
The PermissionProfile class enables the client to set filters that control the contents of the Permission
lists in DataObject instances returned by services. By default, services return an empty Permission
list: the client must explicitly request in a PermissionProfile that permissions be returned.
The PermissionProfile includes a single filter, PermissionTypeFilter, with a corresponding
permissionType setting that is used or ignored depending on the PermissionTypeFilter value. The
permissionType is specified with a Permission.PermissionType enum constant.
The following table describes the permission profile filter settings:
Content Server BASIC permissions are compound (sometimes called hierarchical), meaning that
there are conceptual levels of permission, with each level including all lower-level permissions. For
example, if a user has RELATE permissions on an object, the user is also implicitly granted READ
and BROWSE permissions on the object. This is a convenience for permission management, but it
complicates the job of a service consumer that needs to determine what permissions a user has on
an object.
The PermissionProfile class includes a useCompoundPermissions setting with a default value of
false. This causes any permissions list returned by a service to include all BASIC permissions on
an object. For example, if a user has RELATE permissions on the object, a Permissions list would
be returned containing three BASIC permissions: RELATE, READ, and BROWSE. You can set
useCompoundPermissions to true if you only need the highest-level BASIC permission.
PermissionProfile example
The following example sets a PermissionProfile in operationOptions, specifying that all permissions
are to be returned by the service.
PermissionProfile permissionProfile = new PermissionProfile();
permissionProfile.setPermissionTypeFilter(PermissionTypeFilter.ANY);
OperationOptions operationOptions = new OperationOptions();
operationOptions.setPermissionProfile(permissionProfile);
Relationship
Relationships allow the client to construct a single DataObject that specifies all of its relationships
to other objects, existing and new, and to get, update, or create the entire set of objects and their
relationships in a single service interaction.
The Relationship class and its subclasses, ObjectRelationship and ReferenceRelationship, define
the relationship that a repository object (represented by a DataObject instance) has, or is intended
to have, to another object in the repository (represented within the Relationship instance). The
repository defines object relationships using different constructs, including generic relationship types
represented by hardcoded strings (folder and virtual_document) and dm_relation objects, which
contain references to dm_relation_type objects. The DFS Relationship object provides an abstraction
for dealing with various metadata representations in a uniform manner.
This document will use the term container DataObject when speaking of the DataObject that contains a
Relationship. It will use the term target object to refer to the object specified within the Relationship.
Each Relationship instance defines a relationship between a container DataObject and a target
object. In the case of the ReferenceRelationship subclass, the target object is represented by an
ObjectIdentity; in the case of an ObjectRelationship subclass, the target object is represented by a
DataObject. Relationship instances can therefore be nested, allowing the construction of complex
DataObject graphs.
There are important restrictions to be aware of when retrieving a data graph—see Restrictions when
retrieving deep relationships, page 111.
Relationship model
Figure 8, page 103 shows the model of Relationship and related classes.
Relationship properties
RelationshipIntentModifier
The following table describes the possible values for the RelationshipIntentModifier.
IntentModifier value Description
ADD Specifies that the relation should be added by an update operation if it
does not exist, or updated if it does exist. This is the default value: the
intentModifier of any Relationship is implicitly ADD if it is not explicitly
set to REMOVE.
REMOVE This setting specifies that a relationship should be removed by an update
operation.
Relationship targetRole
Relationships are directional, having a notion of source and target. The targetRole of a Relationship
is a string representing the role of the target in a relationship. In the case of folders and VDMs, the
role of a participant in the relationship can be parent or child. The following table describes the
possible values for the Relationship targetRole.
The order of branching is determined not by hierarchy of parent-child relationships, but by the
nesting of Relationship instances within DataObject instances. In some service processing it may
be useful to reorder the graph into a tree based on parent-child hierarchy. Some services do this
reordering and parse the tree from the root of the transformed structure.
DataObject with references
A DataObject with references models a repository object (new or existing) with relationships to
existing repository objects. References to the existing objects are specified using objects of class
ObjectIdentity.
As an example, consider the case of a document linked into two folders. The DataObject representing
the document would need two ReferenceRelationship instances representing dm_folder objects in
the repository. The relationships to the references are directional: from parent to child. The folders
must exist in the repository for the references to be valid. Figure 11, page 106 represents an object of
this type.
To create this object with references you could write code that does the following:
1. Create a new DataObject: doc1.
2. Add to doc1 a ReferenceRelationship to folder1 with a targetRole of "parent".
3. Add to doc1 a ReferenceRelationship to folder2 with a targetRole of "parent".
In most cases the client would know the ObjectId of each folder, but in some cases the ObjectIdentity
can be provided using a Qualification, which would eliminate a remote query to look up the folder ID.
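In Java, steps 1 and 2 might look like the following sketch (the folder path is a placeholder, and the
Relationship constants shown are assumptions; check the API documentation):
DataObject doc1 = new DataObject(new ObjectIdentity(repName), "dm_document");
ReferenceRelationship folder1Rel = new ReferenceRelationship();
folder1Rel.setName(Relationship.RELATIONSHIP_FOLDER);
folder1Rel.setTarget(new ObjectIdentity(
    new ObjectPath("/MyCabinet/folder1"), repName));
folder1Rel.setTargetRole(Relationship.ROLE_PARENT);
doc1.getRelationships().add(folder1Rel);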
Let’s look at a slightly different example of an object with references (Figure 12, page 107). In this
case we want to model a new folder within an existing folder and link an existing document into
the new folder.
To create this DataObject with references you could write code that does the following:
1. Create a new DataObject: folder1.
2. Add to folder1 a ReferenceRelationship to folder2 with a targetRole of "parent".
3. Add to folder1 a ReferenceRelationship to doc1 with a targetRole of "child".
In many cases it is relatively efficient to create a complete hierarchy of objects and then create or
update it in the repository in a single service interaction. This can be accomplished using a compound
DataObject, which is a DataObject containing ObjectRelationship instances.
A typical case for using a compound DataObject would be to replicate a file system’s folder hierarchy
in the repository. Figure 13, page 108 represents an object of this type.
To create this compound DataObject you could write code that does the following:
1. Create a new DataObject, folder 1.
2. Add to folder 1 an ObjectRelationship to a new DataObject, folder 1.1, with a targetRole of "child".
3. Add to folder 1.1 an ObjectRelationship to a new DataObject, folder 1.1.1, with a targetRole
of "child".
4. Add to folder 1.1 an ObjectRelationship to a new DataObject, folder 1.1.2, with a targetRole
of "child".
5. Add to folder 1 an ObjectRelationship to a new DataObject, folder 1.2, with a targetRole of "child".
In this logic there is a new DataObject created for every node and attached to a containing DataObject
using a child ObjectRelationship.
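For example, steps 1 and 2 might look like the following Java sketch (object names are placeholders;
the Relationship constants are the same assumptions as in the earlier sketch):
DataObject folder1 = new DataObject(new ObjectIdentity(repName), "dm_folder");
folder1.getProperties().set("object_name", "folder 1");
DataObject folder1_1 = new DataObject(new ObjectIdentity(repName), "dm_folder");
folder1_1.getProperties().set("object_name", "folder 1.1");
ObjectRelationship childRel = new ObjectRelationship();
childRel.setName(Relationship.RELATIONSHIP_FOLDER);
childRel.setTarget(folder1_1);
childRel.setTargetRole(Relationship.ROLE_CHILD);
folder1.getRelationships().add(childRel);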
In a normal case of object creation, the new object will be linked into one or more folders. This
means that a compound object will also normally include at least one ReferenceRelationship. Figure
14, page 109 shows a compound data object representing a folder structure with a reference to an
existing folder into which to link the new structure.
To create this compound DataObject you could write code that does the following:
1. Create a new DataObject, folder 1.
2. Add to folder 1 an ObjectRelationship to a new DataObject, folder 1.1, with a targetRole of "child".
3. Add to folder 1.1 an ObjectRelationship to a new DataObject, folder 1.1.1, with a targetRole
of "child".
4. Add to folder 1.1 an ObjectRelationship to a new DataObject, folder 1.1.2, with a targetRole
of "child".
5. Add to folder 1 a ReferenceRelationship to an existing folder 1.2, with a targetRole of "parent".
The preceding diagram shows that a new PARENT relation to folder 3 is added to folder 1, and an
existing relation with folder 2 is removed. This has the effect of linking folder1 into folder3 and
removing it from folder2. The folder2 object is not deleted.
To configure the data object you would:
1. Create a new DataObject, folder1.
2. Add to folder1 a ReferenceRelationship to folder2, with an intentModifier set to REMOVE.
3. Add to folder1 a ReferenceRelationship to folder3, with a targetRole of "parent".
RelationshipProfile
A RelationshipProfile is a client optimization mechanism that provides fine control over the size and
complexity of DataObject instances returned by services. By default, the Object service get operation
returns DataObjects containing no Relationship instances. To alter this behavior, you must provide a
RelationshipProfile that explicitly sets the types of Relationship instances to return.
ResultDataMode
Note that if resultDataMode is set to REFERENCE, the depth of relationships retrieved can be no
greater than 1. This is because the related objects retrieved will be in the form of an ObjectIdentity,
and so cannot nest any Relationship instances.
Relationship filters
RelationshipProfile includes a number of filters that can be used to specify which categories of
Relationship instances are returned as part of a DataObject. For some of the filters you will need to
specify the setting in a separate property and set the filter to SPECIFIED. For example, to filter
by relationName, set nameFilter to SPECIFIED, and use the relationName property to specify the
relationship name string.
The filters are ANDed together to specify the conditions for inclusion of a Relationship instance. For
example, if targetRoleFilter is set to RelationshipProfile.ROLE_CHILD and depthFilter is set to 1,
only proximate child relationships will be returned by the service.
The following table describes the filters and their settings.
However, relationships more than one step removed from the primary DataObject (where depth > 1)
will be returned in a relationship graph only if they have the same relationship name and targetRole
as the first relationship on the branch. Let’s look at a couple of examples of how this works. In all of
the examples we will assume the following settings in the RelationshipProfile:
resultDataMode = ResultDataMode.OBJECT
targetRoleFilter = TargetRoleFilter.ANY
nameFilter = RelationshipNameFilter.ANY
depthFilter = DepthFilter.UNLIMITED
Note that to retrieve any deep relationships resultDataMode must equal ResultDataMode.OBJECT.
The following code retrieves a DataObject with the preceding settings (a sketch, assuming an
instantiated Object service client named objectService):
public DataObject getObjectWithDeepRelationships(ObjectIdentity objectIdentity)
        throws ServiceException
{
    // build a RelationshipProfile with the settings described above
    RelationshipProfile relationProfile = new RelationshipProfile();
    relationProfile.setResultDataMode(ResultDataMode.OBJECT);
    relationProfile.setTargetRoleFilter(TargetRoleFilter.ANY);
    relationProfile.setNameFilter(RelationshipNameFilter.ANY);
    relationProfile.setDepthFilter(DepthFilter.UNLIMITED);
    OperationOptions operationOptions = new OperationOptions();
    operationOptions.setRelationshipProfile(relationProfile);

    ObjectIdentitySet objectIdSet = new ObjectIdentitySet();
    objectIdSet.addIdentity(objectIdentity);
    DataPackage dataPackage = objectService.get(objectIdSet, operationOptions);
    return dataPackage.getDataObjects().get(0);
}
Let’s start with a case where all relationships have the same relationship name (folder).
The primary object in this case is folder_1.2. As you can see, both of its proximate relationships are
retrieved. On the child branch the deep relationship (to folder_1.2.1.1) is retrieved, because both the
name and targetRole of the deep relationship are the same as those of the first relationship on the branch.
However, on the parent branch, the relationship to folder_1.1 is not retrieved, because the targetRole
of the relationship to folder_1.1 (child) is not the same as the targetRole of the first relationship on
the branch (parent).
Let’s look at another example where the relationship name changes, rather than the targetRole. In this
example, we want to retrieve the relationships of a folder that has two child folders. Each child folder
contains a document, and one of the documents is a virtual document that contains the other.
As before, both proximate relationships are retrieved. The deep folder relationships to the documents
are also retrieved. But the virtual_document relationship is not retrieved, because its relationship
name (virtual_document) is not the same as the name of the first relationship on the branch (folder).
Custom relationships
It is sometimes useful to create a custom relationship by extending the dm_relation object, which
allows you to add custom properties to the relationship. You can extend the dm_relation object
independently of defining a custom dm_relation_type object. To extend dm_relation, you could use
Composer, or you could use a DQL similar to the following:
CREATE TYPE acme_custom_relation (str_attr string(32),
bool_attr boolean,
repeat_attr string(32) REPEATING)
WITH SUPERTYPE dm_relation PUBLISH
You can reference a custom relationship in the name property of a DFS Relationship object using
the syntax:
<dm_relation_type name>/<dm_relation subtype name>
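For example, using the types created in the scenario that follows, the relationship name would be
set like this (a hypothetical sketch):
objectRelationship.setName("acme_geoloc_relation_type/acme_geoloc_relation");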
Let’s look at an example of how you might use such an extended relationship. Suppose you wanted to
create a custom object type called acme_geoloc to contain geographic place names and locations that
can be used to display positions in maps. This geoloc object contains properties such as place name,
latitude, and longitude. You want to be able to associate various documents, such as raster maps,
tour guides, and hotel brochures with an acme_geoloc object. Finally, you also want to be able to
capture metadata about the relationship itself.
To enable this, you could start by making the following modifications in the repository using
Composer:
• Create an acme_geoloc type (with no supertype), with properties "name", "latitude", and
"longitude".
• Create an instance of dm_relation_type, which you might call acme_geoloc_relation_type. In
this instance, set the parent_type property to "dm_document" and the child_type property to
"acme_geoloc".
• Create a subtype of dm_relation called acme_geoloc_relation. Add a couple of attributes to this type
to track metadata about the relationship: rel_class (string) and is_validated (boolean).
Once these objects are created in the repository, your application can create relationships at runtime
between document (dm_document) objects and acme_geoloc objects. By including the relationship in
DataObject instances, your client application can choose to include geolocation information about the
document for display in maps, and also examine custom metadata about the relationship itself. The
following Java sample code creates an acme_geoloc object, a document, and a relationship of type
acme_geoloc_relation_type between the document and the acme_geoloc.
// define a document
DataObject docDataObj = new DataObject(new ObjectIdentity
(defaultRepositoryName),"dm_document");
PropertySet docProperties = new PropertySet();
docProperties.set("object_name", "T-Eiffel");
docProperties.set("title", "Guide to the Eiffel Tower");
docDataObj.setProperties(docProperties);
propertyProfile.setFilterMode(PropertyFilterMode.ALL_NON_SYSTEM);
Aspect
The Aspect class models an aspect, and provides a means of attaching an aspect to a persistent object,
or detaching an aspect from a persistent object during a service operation.
Aspects are a mechanism for adding behavior and/or attributes to a Documentum object instance
without changing its type definition. They are similar to TBOs, but they are not associated with
any one document type. Aspects also are late-bound rather than early-bound objects, so they can
be added to an object or removed as needed.
Aspects are a BOF type (dmc_aspect_type). Like other BOF types, they have these characteristics:
• Aspects are installed into a repository.
• Aspects are downloaded on demand and cached on the local file system.
• When the code changes in the repository, aspects are automatically detected and new code is “hot
deployed” to the DFC (and therefore DFS) client.
This chapter is intended to introduce you to writing custom services in the DFS framework and
how to use the DFS SDK build tools to generate a deployable EAR file. Sample custom services are
also provided to get you started on developing your own custom services with DFS. This chapter
contains the following sections:
• Service design considerations, page 119
• The well-behaved service implementation, page 121
• Creating a custom service with the DFS SDK build tools, page 123
• Annotating a service, page 124
• Service namespace generation, page 129
• DFS exception handling, page 130
• Defining the service address, page 133
• Building and packaging a service into an EAR file, page 134
• Exploring the Hello World service, page 134
• Exploring AcmeCustomService, page 136
• Chapter 9, The DFS Build Tools
import com.emc.documentum.fs.rt.context.DfcSessionManager;
...
World service, page 134 and Exploring AcmeCustomService, page 136. To create a custom service
with the DFS SDK build tools:
1. Create a service class and annotate it correctly as described in Annotating a service, page 124. The
class must be annotated for the DFS SDK build tools to correctly build and package the service.
2. Determine if you want the service namespace to be automatically generated or if you want
to specify the service namespace explicitly. For more information, see Service namespace
generation, page 129.
3. Implement your service by using the principles that are described in The well-behaved service
implementation, page 121. Refer to DFS exception handling, page 130 if you need guidance
on creating and handling custom exceptions.
4. Define the address at which you want the service to be addressable, as described in Defining the
service address, page 133.
5. Build and package your service with the DFS SDK build tools as described in Building and
packaging a service into an EAR file, page 134.
Annotating a service
Class annotation
DFS specifies two Java annotations that you must annotate your service class with so the DFS SDK
build tools know how to build and package your service. The annotations, @DfsBofService and
@DfsPojoService, are defined in the package com.emc.documentum.fs.rt.annotations. To annotate
a service class, insert the @DfsPojoService or @DfsBofService annotation immediately above the
service class declaration.
import com.emc.documentum.fs.rt.annotations.DfsPojoService;
@DfsPojoService()
public class AcmeCustomService implements IAcmeCustomService
{
// service implementation
}
@DfsBofService()
public class MySBO extends DfService implements IMySBO
{
//SBO service implementation
}
The annotation attributes, described in the following tables, provide overrides to default DFS SDK
build tools behavior.
Attribute Description
serviceName The name of the service. Required to be non-empty.
targetNamespace Overrides the default Java-package-to-XML-namespace conversion
algorithm. Optional.
requiresAuthentication When set to "false", specifies that this is an open service, requiring no
user authentication. Default value is "true".
useDeprecatedExceptionModel Set to true if you want to maintain backwards compatibility with
the DFS 6.0 SP1 or earlier exception model. Default value is “false”.
Attribute Description
implementation Name of implementation class. Required to be non-empty if the
annotation applies to an interface declaration.
targetNamespace Overrides the default Java-package-to-XML-namespace conversion
algorithm. Optional.
targetPackage Overrides the default Java packaging algorithm. Optional.
requiresAuthentication When set to "false", specifies that this is an open service, requiring no
user authentication. Optional; default value is "true".
useDeprecatedExceptionModel Set to true if you want to maintain backwards compatibility with
the DFS 6.0 SP1 or earlier exception model. Default value is “false”.
Note: Although DFS leverages JAX-WS, it does not support JSR-181 annotations. This is due to
the difference in emphasis between DFS (service orientation approach) and JAX-WS (web service
implementation). DFS promotes an XML-based service model and adapts JAX-WS tools (specifically
wsgen and wsimport) to this service model.
import javax.xml.bind.annotation.*;
import java.util.List;

public boolean isSessionPoolingActive()
{
    return isSessionPoolingActive;
}

@XmlElement(name = "Repositories")
private List repositories;
@XmlAttribute
private boolean isSessionPoolingActive;
@XmlAttribute
private boolean hasActiveSessions;
@XmlAttribute
private String defaultSchema;
}
When annotating data type classes, the following annotations are recommended:
• @XmlType:
@XmlType(name = "AcmeServiceInfo",
namespace = "http://common.samples.services.acme.com/")
When naming fields and accessors, the following conventions are recommended:
• With naming lists and arrays, use plurals; for example:
String value
List<String> values
• As a basic requirement of JavaBeans and general Java convention, a field’s accessors (getters and
setters) should incorporate the exact field name. This leads to desired consistency between the
field name, method names, and the XML element name.
@XmlAttribute
private String defaultSchema;
• Annotate primitive and simple data types (int, boolean, long, String, Date) using @XmlAttribute.
• Annotate complex data types and lists using @XmlElement, for example:
@XmlElement(name = "Repositories")
private List repositories;
@XmlElement(name = "MyComplexType")
private MyComplexType myComplexTypeInstance;
Things to avoid
The following should be avoided when implementing classes that bind to XML types.
• Avoid exposing complex collections as an XML type, other than List<Type>. One-dimensional
arrays are also safe.
• Avoid adding significant behaviors to a type, other than convenience methods such as map
interfaces and constructors.
• Avoid use of the @XmlElements annotation. This annotation results in an <xsd:choice> in the
schema; inheritance is preferable. Annotate the base class with @XmlSeeAlso instead (see Data type
annotation, page 127, and the sketch following this list).
The following conditions can also lead to problems either with the WSDL itself, or with .NET WSDL
import utilities.
• Use of the @XmlRootElement annotation can cause namespace problems with JAXB 2.1. As a
result, the .NET WSDL import utility may complain about "incompatibility of types."
• It is highly recommended that you always use @XmlAccessorType(XmlAccessType.FIELD)
to annotate data type classes. If you use the default value for @XmlAccessType (which is
PROPERTY), the service generation tools will parse all methods beginning with "get" and "set",
which makes it difficult to control how the text following "get" and "set" is converted to XML. If
you then add an explicit @XmlElement or @XmlAttribute on a field that already has a getter and
setter, the field is likely to be included more than once in the XML schema with slightly different
naming conventions.
• Exercise caution using the @XmlContent annotation. Not all types can support it. We recommend
using it only for representations of long strings.
ObjectIdentity objectIdentity1 = dp1.getDataObjects().get(0).getIdentity();
ObjectIdentity objectIdentity2 = dp2.getDataObjects().get(0).getIdentity();
System.out.println("object created: " + objectIdentity1.getValue().toString());
System.out.println("object created: " + objectIdentity2.getValue().toString());
}
You can override this namespace generation by specifying a value for the targetNamespace
attribute for the service annotation that you are using (@DfsPojoService or @DfsBofService). For
more information on overriding the target namespace for a service, see Overriding default service
namespace generation, page 130.
import com.emc.documentum.fs.rt.annotations.DfsPojoService;

@DfsPojoService(targetNamespace = "http://samples.services.acme.com")
public class AcmeCustomService
.
.
.
With this input, the DFS SDK build tools generate the service interface and other DFS artifacts in
the com.acme.services.samples.client package. It places the service implementation and other files
generated by JAX-WS in the com.acme.services.samples package. The service namespace would be
"http://samples.services.acme.com", as specified in the service annotation attribute.
Note: A conflict occurs when you have two services with the namespaces http://a.b.c.d and
http://b.c.d/a. In this case, when JAX-WS tries to generate the client proxies for these two services,
they will be generated in the same package (d.c.b.a), so you will only be able to call the first service
on the classpath. Avoid assigning namespaces in this way to prevent this situation.
1. Create a class for your custom exception with an appropriate constructor as described in the
requirements.
2. Add a field named exceptionBean of type List<DfsExceptionHolder> to the exception class.
Ensure accessor and mutator methods exist for this field.
3. Define the fields that you want the exception to contain. Ensure accessor and mutator methods
exist for these fields and that each field is JAXB serializable.
4. Write your service and throw your exception where needed.
The DFS runtime will receive the DfsExceptionHolder object and re-create and throw the exception
on the client side.
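The following is a minimal sketch of such an exception class, assuming (as in the AcmeCustomService
sample) that it extends the DFS ServiceException; the import locations and the errorCode field are
illustrative and should be verified against your SDK:
package com.acme.services.samples.common;

// Import paths may vary by SDK version
import com.emc.documentum.fs.rt.DfsExceptionHolder;
import com.emc.documentum.fs.rt.ServiceException;
import java.util.List;

public class CustomException extends ServiceException
{
    public CustomException(String message, Throwable cause)
    {
        super(message, cause);
    }

    // Required by the DFS runtime to re-create the exception on the client
    public List<DfsExceptionHolder> getExceptionBean()
    {
        return exceptionBean;
    }

    public void setExceptionBean(List<DfsExceptionHolder> exceptionBean)
    {
        this.exceptionBean = exceptionBean;
    }

    // An illustrative custom field, with accessor and mutator; each such
    // field must be JAXB serializable
    public String getErrorCode()
    {
        return errorCode;
    }

    public void setErrorCode(String errorCode)
    {
        this.errorCode = errorCode;
    }

    private List<DfsExceptionHolder> exceptionBean;
    private String errorCode;
}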
In dfs-runtime.properties:
resource.bundle = dfs-messages
resource.bundle.1 = dfs-services-messages
resource.bundle.2 = dfs-bpm-services-messages
In local-dfs-runtime.properties:
resource.bundle.3 = my-custom-services-messages
Note: A limitation of this approach is that if a new resource bundle is required for core
services in a future release, it would be defined as "resource.bundle.3" and the one defined in
local-dfs-runtime.properties would override it. If you define a custom resource bundle, be aware
that this could cause a future migration issue.
For more information see “DFS runtime property configuration” in the Documentum Foundation
Services Deployment Guide.
When instantiating a service, a Java client application can pass the module name and the
fully-qualified context root to ServiceFactory.getRemoteService, as shown here:
mySvc = serviceFactory.getRemoteService(IAcmeCustomService.class,
context,
"samples",
"http://localhost:7001/services");
Alternatively, the client can call an overloaded method of getRemoteService that does not include
the module and contextRoot parameters. In this case the client runtime obtains the module and
contextRoot values from the dfs-client.xml configuration file, which specifies default service
addressing values. The dfs-client.xml used by AcmeCustomService is located in resources\config.
Its contents are shown here:
<DfsClientConfig defaultModuleName="samples" registryProviderModuleName="samples">
<ModuleInfo name="samples"
protocol="http"
host="127.0.0.1"
port="7001"
contextRoot="services">
</ModuleInfo>
</DfsClientConfig>
The order of precedence is as follows. The DFS runtime will first use parameters passed in
the getRemoteService method. If these are not provided, it will use the values provided in the
DfsClientConfig configuration file.
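For example, the following is a minimal sketch of the overloaded call (mySvc and context as in
the previous snippet):
mySvc = serviceFactory.getRemoteService(IAcmeCustomService.class, context);
With this overload, the module name and context root are read from dfs-client.xml.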
1. Call the generateModel task, specifying as input the annotated source. For more information on
calling this task, see generateModel task, page 146.
2. Call the generateArtifacts task, specifying as input the annotated source and service model. For
more information on calling this task, see generateArtifacts task, page 147.
3. Call the buildService task to build and package JAR files for your service implementation classes.
For more information on calling this task, see buildService task, page 148.
4. Call the packageService task to package all source artifacts into a deployable EAR or WAR file.
For more information on calling this task, see packageService task, page 149.
1. Extract the emc-dfs-sdk-6.7.zip file to a location of your choosing. This location will be referred
to as %DFS_SDK%. The root directory of the samples, %DFS_SDK%/samples, will be referred
to as %SAMPLES_LOC%.
5. Edit the %DFS_SDK%/etc/dfc.properties file and specify, at a minimum, correct values for the
dfc.docbroker.host[0] and dfc.docbroker.port[0] properties. The dfc.properties file is packaged
with the service during the build.
The Hello World service requires access to the connection broker to authenticate the user. If the
service did not require authentication, you would not have to set values in the dfc.properties
file, because the Hello World service does not access a repository. Authentication is intentionally
set to true to demonstrate the steps necessary to build an authenticated service. As an optional
exercise, you can choose to change the annotation for the service and not require authentication
(requiresAuthentication=false). In this case, you do not have to specify anything in the
dfc.properties file, because the service never interacts with a connection broker.
6. From the command prompt, enter the following commands:
cd %SAMPLES_LOC%/Services/HelloWorldService
ant artifacts package
The %SAMPLES_LOC%/Services/HelloWorldService/build/example.ear file should appear after
the build is successful.
7. Copy the %SAMPLES_LOC%/Services/HelloWorldService/build/example.ear file to the
%DOCUMENTUM_HOME%\jboss4.2.0\server\DctmServer_DFS\deploy directory (default
deploy location for the DFS application server) or to wherever you deploy your web applications.
8. Restart the DFS application server. Once the server is restarted, the Hello World service should be
addressable at http://<host>:<port>/services/example/HelloWorldService.
You can test the deployment by typing http://<host>:<port>/services/example/HelloWorldService?wsdl
into a browser (replacing <host>:<port> with the correct domain).
Exploring AcmeCustomService
AcmeCustomService serves as a minimal example that demonstrates fundamental techniques that
you need to develop your own services. This section provides a brief tour of the AcmeCustomService
sample and shows you how to generate, deploy, and test the AcmeCustomService service.
AcmeCustomService is located in the %DFS_SDK%/samples/AcmeCustomService directory where
%DFS_SDK% is the location of your DFS SDK.
Overview of AcmeCustomService
The AcmeCustomService class demonstrates basic concepts of how to write a custom DFS service. It
gets a DFC session manager to begin using the DFC API, invokes (chains in) a core DFS service (the
Schema service) from your custom service, and populates an AcmeServiceInfo object with information
that is obtained from these two sources. For information on how the class for the AcmeServiceInfo
object is implemented and annotated, see Data type and field annotation, page 125. The service also
contains an operation to test the custom exception handling functionality of DFS.
The getAcmeServiceInfo method gets a DFC session manager and populates the AcmeServiceInfo
object with data from the session manager:
IDfSessionManager manager = DfcSessionManager.getSessionManager();
A reference to AcmeCustomService’s own service context is then obtained. Notice the use of the
getContext method rather than the newContext method. The getContext method enables the calling
service to share identities and any other service context settings with the invoked service:
IServiceContext context = ContextFactory.getInstance().getContext();
The context, the explicit service module name ("core"), and the context root
("http://127.0.0.1:8888/services") are passed to the getRemoteService method to get the Schema
service. (You may need to change the hardcoded address of the remotely invoked Schema service,
depending on your deployment.)
ISchemaService schemaService
= ServiceFactory.getInstance()
.getRemoteService(ISchemaService.class,
context,
"core",
"http://127.0.0.1:8888/services");
Note: It is also possible to invoke DFS services locally rather than remotely in your custom service,
if the service JARs from the SDK have been packaged in your custom service EAR file. There are
a number of potential advantages to local invocation of the DFS services—improved performance,
the ability to share a registered service context between your custom service and the DFS services,
and the ability to include invocation of the DFS services within a transaction (see Transactions in
a custom service, page 128). To enable local invocation of services, make sure that the local JAR
files for the service (for example, emc-dfs-services.jar for core services and emc-dfs-search-service.jar
for search services) are included in the call to the packageService task in the Ant build.xml. Do
not include *-remote.jar files.
The getSchemaInfo operation of the Schema service is called and information from this request is
printed out:
schemaInfo = schemaService.getSchemaInfo(repositoryName, null, operationOptions);
The testExceptionHandling() method demonstrates how you can create and throw custom
exceptions. The method creates a new instance of CustomException and throws it. The client
side runtime catches the exception and recreates it on the client, preserving all of the custom
attributes. You must follow certain guidelines to create a valid custom exception that can
be thrown over the wire to the client. For more information on how to create a DFS custom
exception, see DFS exception handling, page 130. The CustomException class is located in the
%SAMPLES_LOC%/AcmeCustomService/src/service/com/acme/services/samples/common directory.
package com.acme.services.samples.impl;
import com.acme.services.samples.common.AcmeServiceInfo;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSessionManagerStatistics;
import com.emc.documentum.fs.datamodel.core.OperationOptions;
import com.emc.documentum.fs.datamodel.core.schema.SchemaInfo;
import com.emc.documentum.fs.rt.annotations.DfsPojoService;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.rt.context.impl.DfcSessionManager;
import com.emc.documentum.fs.services.core.client.ISchemaService;
import com.acme.services.samples.common.CustomException;
import java.util.ArrayList;
import java.util.Iterator;
@DfsPojoService(targetNamespace = "http://samples.services.acme.com")
public class AcmeCustomService
{
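    // ... class body abridged; this excerpt resumes at the end of the
    // getAcmeServiceInfo method ...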
return acmeServiceInfo;
}
}
build.properties
The build.properties file contains property settings that are required by the Ant build.xml file. To
build AcmeCustomService, there is no need to change any of these settings, unless you have moved
the AcmeCustomService directory to another location relative to the root of the SDK. In this case,
you need to change the dfs.sdk.home property. If you want AcmeCustomService to be automatically
copied to the deploy directory of the JBoss application server when you run the deploy target, specify
the directory in the autodeploy.dir property.
# EMC DFS SDK 6.7 build properties template
dfs.sdk.home=../..
# Compiler options
compiler.debug=on
compiler.generate.no.warnings=off
compiler.args=
compiler.max.memory=128m
fork = true
nonjava.pattern = **/*.java,**/.svn,**/_svn
# Establish the production and tests build folders
build.folder = build
module.name = samples
context.root = services
#Debug information
debug=true
keep=true
verbose=false
extension=true
#Deploy params
autodeploy.dir=C:/Documentum/jboss4.2.0/server/DctmServer_DFS/deploy
dfc.properties
The service-generation tools package a copy of dfc.properties within the service EAR file. The
properties defined in this dfc.properties file configure the DFC client utilized by the DFS service
runtime. The copy of dfc.properties is obtained from the DFS SDK etc directory. The dfc.properties
file must specify the address of a docbroker that can provide access to any repositories required by
the service and its clients, for example:
dfc.docbroker.host[0]=10.8.13.190
dfc.docbroker.port[0]=1489
build.xml
The Ant build.xml file drives all stages of generating and deploying the custom service. It contains
the targets shown in Table 21, page 141, which can be run in order to generate and deploy the
custom service.
2. Edit the Java or .NET consumer code and specify correct values for the service address. The call
to getRemoteService assumes that the instance of JBoss that you are deploying to is running on the
local host on port 8888. Change these values if they are incorrect.
3. Run the Java consumer at the command prompt from the %DFS_SDK%/samples/Services/
AcmeCustomService directory:
ant run
Run the .NET consumer in Visual Studio.
dfs-client.xml
The dfs-client.xml file contains properties used by the Java client runtime for service addressing. The
AcmeCustomService test consumer provides the service address explicitly when instantiating the
service object, so it does not use these defaults. However, it's important to know that these defaults are
available and where to set them. The %DFS_SDK%/etc folder must be included in the classpath for
clients to utilize dfs-client.xml. If you want to place dfs-client.xml somewhere else, you must place it
in a directory named config and its parent directory must be in the classpath. For example, if you
place the dfs-client.xml file in the c:/myclasspath/config/dfs-client.xml directory, add c:/myclasspath
to your classpath.
<DfsClientConfig defaultModuleName="samples"
registryProviderModuleName="samples">
<ModuleInfo name="samples"
protocol="http"
host="127.0.0.1"
port="8888" contextRoot="services">
</ModuleInfo>
</DfsClientConfig>
1. Call the generateService task and specify, at a minimum, values for wsdlUri and destDir. For
more information on the generateService task, see generateService task, page 149.
<target name="generateServiceStubs">
    <generateService
        wsdlUri="http://host:port/contextroot/module/ServiceName?wsdl"
        destDir="outputDir" />
</target>
The service stub and the data model classes are placed in a directory structure that is
determined by their target namespaces. For example, if the WSDL has a target namespace
of http://module.contextroot.fs.documentum.emc.com, the service stub will be placed in the
outputDir/com/emc/documentum/fs/contextroot/module/impl directory. The data model classes
are output in a similar fashion.
2. Edit the service stub, which is located in the targetNamespace/impl directory. Each method in the
service stub throws an UnsupportedOperationException as a placeholder. It is up to you to
implement the logic and exception handling for each method.
At least one method should throw a ServiceException. This is because DFS encapsulates runtime
exceptions (exceptions that are not declared in the throws clause) in a ServiceException on the server
and passes it to the client. Throwing a ServiceException ensures that the class is included with the
service when you build and package it.
3. Once the service is implemented, you can use the DFS build tools to build and package the service.
The DFS build tools rely on a set of Ant tasks that can help you create and publish services and
generate client support for a service. When developing your own services, you might need to
extend the classpaths of these tasks to include libraries that are required by your service. To see
how the tasks are used in the context of a build environment, examine the build.xml file in the
AcmeCustomService sample.
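As an illustration of the stub editing described in step 2, the following is a minimal sketch of an
edited method; the operation, return type, and message id are hypothetical, and the imports follow
the earlier samples:
public AcmeServiceInfo getAcmeServiceInfo(OperationOptions options)
    throws ServiceException
{
    try
    {
        // Replace the generated UnsupportedOperationException placeholder
        // with the real operation logic
        return new AcmeServiceInfo();
    }
    catch (Exception e)
    {
        // Hypothetical message id; declaring and throwing ServiceException
        // ensures the class is packaged with the service
        throw new ServiceException("E_CUSTOM_OPERATION_FAILED", e);
    }
}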
Apache Ant
The DFS design-time tools for generating services rely on Apache Ant, and were created using Ant
version 1.7.0. You will need to have installed Ant 1.7.0 or higher in your development environment to
run the DFS tools. Make sure that your path environment variable includes a path to the Ant bin
directory.
You can then call the individual tasks as described in the sample usage for each task:
• generateModel task, page 146
• generateArtifacts task, page 147
• buildService task, page 148
• packageService task, page 149
• generateService task, page 149
• generateRemoteClient task, page 150
• generatePublishManifest task, page 151
generateModel task
The generateModel Ant task takes the annotated source code as input to the tools and generates
a service model XML file named {contextRoot}-{serviceName}-service-model.xml, which describes
service artifacts to be generated by subsequent processes. The generateModel task is declared in the
emc-dfs-tasks.xml task definition file as follows:
<taskdef name="generateModel" classname="com.emc.documentum.fs.tools.
GenerateModelTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/utils/aspectjrt.jar"/>
</taskdef>
Argument      Description
contextRoot   Attribute representing the root of the service address. For example, in
              the URL http://127.0.0.1:8080/services/, "services" signifies the context
              root.
moduleName    Attribute representing the name of the service module.
destDir       Attribute representing a path to a destination directory into which to
              place the output service-model XML.
<services>    An element that provides a list (a <fileset>) specifying the annotated
              source artifacts.
<classpath>   An element providing paths to binary dependencies.
In the sample service build.xml, the generateModel task is configured and executed as follows:
<generateModel contextRoot="${context.root}"
moduleName="${module.name}"
destdir="${project.artifacts.folder}/src">
<services>
<fileset dir="${src.dir}">
<include name="**/*.java"/>
</fileset>
</services>
<classpath>
<pathelement location="${dfs.sdk.libs}/dfc/dfc.jar"/>
<path refid="project.classpath"/>
</classpath>
</generateModel>
generateArtifacts task
The generateArtifacts Ant task takes the source modules and service model XML as input, and
creates all output source artifacts required to build and package the service. These include the service
interface and implementation classes, data and exception classes, runtime support classes, and
service WSDL with associated XSDs.
The generateArtifacts task is declared in the emc-dfs-tasks.xml task definition file as follows:
<taskdef name="generateArtifacts"
classname="com.emc.documentum.fs.tools.build.ant.
GenerateArtifactsTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/dfc/aspectjrt.jar"/>
</taskdef>
Argument       Description
serviceModel   Attribute representing a path to the service model XML created by the
               generateModel task.
destDir        Attribute representing the folder into which to place the output source
               code. Client code is by convention placed in a "client" subdirectory,
               and server code in a "ws" subdirectory.
<src>          Element containing a location attribute representing the location of the
               annotated source code.
<classpath>    An element providing paths to binary dependencies.
In the sample service build.xml, the generateArtifacts task is configured and executed as follows:
<generateArtifacts
    serviceModel="${project.artifacts.folder}/src/${context.root}-${module.name}-service-model.xml"
    destdir="${project.artifacts.folder}/src"
    api="rich">
    <src location="${src.dir}"/>
    <classpath>
        <path location="${basedir}/${build.folder}/classes"/>
        <path location="${dfs.sdk.home}/lib/emc-dfs-rt.jar"/>
        <path location="${dfs.sdk.home}/lib/emc-dfs-services.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
        <fileset dir="${dfs.sdk.home}/lib/ucf">
            <include name="**/*.jar"/>
        </fileset>
        <path location="${dfs.sdk.home}/lib/jaxws/jaxb-api.jar"/>
        <path location="${dfs.sdk.home}/lib/jaxws/jaxws-tools.jar"/>
        <path location="${dfs.sdk.home}/lib/commons/commons-lang-2.1.jar"/>
        <path location="${dfs.sdk.home}/lib/commons/commons-io-1.2.jar"/>
    </classpath>
</generateArtifacts>
buildService task
The buildService task takes the original annotated source, as well as output from the
generateArtifacts task, and builds two JAR files:
• A remote client package: {moduleName}-remote.jar
• A server (and local client) package: {moduleName}.jar
The buildService task is declared in the emc-dfs-tasks.xml task definition file as follows:
<taskdef name="buildService" classname="
com.emc.documentum.fs.tools.build.ant.BuildServiceTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-
tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-
dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/dfc/
aspectjrt.jar"/>
</taskdef>
Argument      Description
serviceName   Attribute representing the name of the service module.
destDir       Attribute representing the folder into which to place the output JAR files.
<src>         Element containing location attributes representing the locations of the
              input source code, including the original annotated source and the source
              output by generateArtifacts.
<classpath>   Element providing paths to binary dependencies.
In the sample service build.xml, the buildService task is configured as follows:
<buildService serviceName="${service.name}"
destDir="${basedir}/${build.folder}"
generatedArtifactsDir="${project.resources.folder}">
<src>
<path location="${src.dir}"/>
<path location="${project.artifacts.folder}/src"/>
</src>
<classpath>
<pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
<path refid="project.classpath"/>
</classpath>
</buildService>
packageService task
The packageService task packages all service artifacts into an EAR file that is deployable to the
application server. The packageService task is declared in the emc-dfs-tasks.xml task definition
file as follows:
<taskdef name="packageService"
classname="com.emc.documentum.fs.tools.build.ant.
PackageServiceTask">
<classpath location="${dfs.sdk.home}/lib/java/
emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/
emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/dfc/aspectjrt.jar"/>
</taskdef>
Argument                   Description
deploymentName             Attribute representing the name of the service module. You can
                           specify a .ear or .war (for Tomcat deployment) file extension,
                           depending on the type of archive that you want.
destDir                    Attribute representing the folder into which to place the
                           output archives.
generatedArtifactsFolder   Path to the folder in which the WSDL and associated files
                           have been generated.
<libraries>                Element specifying paths to binary dependencies.
<resources>                Element providing paths to resource files.
In the sample service build.xml, the packageService task is configured as follows:
<packageService deploymentName="${service.name}"
                destDir="${basedir}/${build.folder}"
                generatedArtifactsDir="${project.resources.folder}">
    <libraries>
        <pathelement location="${basedir}/${build.folder}/${service.name}.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/emc-dfs-rt.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/emc-dfs-services.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
    </libraries>
    <resources>
        <path location="${dfs.sdk.home}/etc/dfs.properties"/>
    </resources>
</packageService>
generateService task
The generateService Ant task takes a given WSDL as input and generates a DFS annotated service
stub and its data model. You can use these generated files to create your custom service and input
them into the DFS runtime tools to generate and package your service. The location of the WSDL can
either be local (file://) or remote (http://). The generateService task is declared in the emc-dfs-tasks.xml
task definition file as follows:
<taskdef name="generateService"
classname="com.emc.documentum.fs.tools.GenerateServiceTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-
tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/utils/
aspectjrt.jar"/>
</taskdef>
Argument   Description
wsdlUri    The local (file://) or remote (http://) location of the WSDL.
destDir    Attribute representing the folder into which to place the output source code.
debug      The debug mode switch ("on" or "off").
verbose    The verbose mode switch ("on" or "off").
You can call the generateService task within a target as follows:
<generateService
    wsdlUri="${wsdl.location}"
    destDir="${dest.dir}"
    verbose="true"
    debug="false"/>
generateRemoteClient task
The generateRemoteClient Ant task takes a given WSDL as input and generates client proxies for the
service described by the WSDL. The client proxies that are generated differ from the client libraries
that are provided in the DFS client productivity layer. For more information on the differences,
see WSDL-first consumption of services, page 56. You can use these generated files to help you
create your consumer. The location of the WSDL can either be local (file://) or remote (http://). The
generateRemoteClient task is declared in the emc-dfs-tasks.xml task definition file as follows:
<taskdef name="generateRemoteClient"
classname="com.emc.documentum.fs.tools.GenerateRemoteClientTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/utils/aspectjrt.jar"/>
</taskdef>
Argument             Description
wsdlUri (required)   The local (file://) or remote (http://) location of the WSDL.
destdir (required)   Attribute representing the folder into which to place the output
                     source code.
serviceProtocol      Either http or https (default is http).
serviceHost          The host where the service is located. This value defaults to the
                     WSDL host, so if the WSDL is a local file, specify the host where
                     the service is located.
servicePort          The port of the service host. This value defaults to the WSDL host
                     port, so if the WSDL is a local file, specify the port where the
                     service is located.
serviceContextRoot   The context root where the service is deployed. This value defaults
                     to the WSDL context root, so if the WSDL is a local file, specify
                     the context root where the service is located.
serviceModuleName    The name of the service module. This value defaults to the WSDL
                     service module, so if the WSDL is a local file, specify the module
                     where the service is located.
All attributes except for wsdlUri and destdir are used to override values that are generated from
the WSDL by the generateRemoteClient task.
You can call the generateRemoteClient task within a target as follows:
<generateRemoteClient
    wsdlUri="${wsdl.location}"
    destdir="${dest.dir}"
    serviceProtocol="http"
    serviceHost="localhost"
    servicePort="8888"
    serviceContextRoot="services"
    serviceModuleName="core" />
generatePublishManifest task
The generatePublishManifest task generates an XML manifest file that is taken as input by the DFS
Publish Utility. The generatePublishManifest task is declared in the emc-dfs-tasks.xml task definition
file as follows:
<taskdef name="generatePublishManifest"
classname="com.emc.documentum.fs.tools.registry.ant.GeneratePublishManifestTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar" />
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar" />
<classpath location="${dfs.sdk.home}/lib/java/utils/aspectjrt.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxr/jaxr-impl.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxr/jaxr-api.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxws/jaxb-api.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxws/jaxb-impl.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxws/jaxb1-impl.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxws/jsr181-api.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxws/jsr173_api.jar" />
<classpath location="${dfs.sdk.home}/lib/java/commons/commons-lang-2.1.jar" />
</taskdef>
Argument       Description
file           The output service manifest file.
organization   The organization to publish the services under.
<modules>      An element containing the location of the service model file of the
               services that you want to publish. You can have multiple <modules>
               elements. Each <modules> element contains <pathelement> elements that
               specify the location of the service model with the "location" attribute.
<publishset>   An element containing the services that you want to publish and the
               catalog and categories that you want the services to be under.
The generatePublishManifest task requires the <modules> and <publishset> nested elements. An
example of how to call the task is shown in the following sample:
<target name="generateManifest">
<generatePublishManifest file="example-publish-manifest.xml" organization="EMC">
<modules>
<pathelement location="services-example-service-model.xml"/>
</modules>
<publishset>
<service name="MyService1" module="example"/>
<service name="MyService2" module="example"/>
<catalog name="Catalog1"/>
<category name="Category1"/>
<category name="Category2"/>
</publishset>
</generatePublishManifest>
</target>
When multiple service modules are packaged in the same EAR file, they can all share a common
cache of ServiceContext objects, which allows you to register a context once and call any service in
any module within the same EAR file.
To package multiple service modules in one EAR file:
1. Run the generateModel Ant task for each of the service modules that you want to create. Ensure
that you specify appropriate values for the following parameters:
• contextRoot — Specify the same value for each service module that you want to create. A
good value to use is “services.”
• moduleName — Specify different values for each service module that you want to create.
This value is unique to each service module and creates different service URLs for each of
your service modules.
• destDir — Specify the same value for each service module that you want to create. Using the
same destination directory ensures that the service modules get packaged into the same
EAR file.
For example, if you want to create service modules with URLs at /services/core, /services/bpm,
and /services/search, your generateModel tasks might look like the following:
<generateModel contextRoot="services"
               moduleName="core"
               destdir="build/services">
    ...
</generateModel>
<generateModel contextRoot="services"
               moduleName="bpm"
               destdir="build/services">
    ...
</generateModel>
<generateModel contextRoot="services"
               moduleName="search"
               destdir="build/services">
    ...
</generateModel>
2. Run the generateArtifacts Ant task for each service module that you want to create. For example,
given the output generated by the example above, your generateArtifacts tasks should look like
the following:
<generateArtifacts serviceModel="build/services/services-core-service-model.xml"
                   destdir="build/services">
    ...
</generateArtifacts>
<generateArtifacts serviceModel="build/services/services-bpm-service-model.xml"
                   destdir="build/services">
    ...
</generateArtifacts>
<generateArtifacts serviceModel="build/services/services-search-service-model.xml"
                   destdir="build/services">
    ...
</generateArtifacts>
3. Run the buildService Ant task for each of the service modules that you want to create.
For example, given the output generated by the examples above, your buildService tasks should
look like the following:
<buildService serviceName="core"
              destdir="dist/services"
              generatedArtifactsDir="build/services">
    ...
</buildService>
<buildService serviceName="bpm"
              destdir="dist/services"
              generatedArtifactsDir="build/services">
    ...
</buildService>
<buildService serviceName="search"
              destdir="dist/services"
              generatedArtifactsDir="build/services">
    ...
</buildService>
4. Run the packageService task once to package all of your service modules together in the same
EAR file. For example, given the output generated by the examples above, your packageService
task should look like the following:
<packageService deploymentName="emc-dfs"
destDir="dist/services"
generatedArtifactsDir="build/services">
...
</packageService>
You should now have all of your service modules packaged into one EAR file, which can be
deployed in your application server.
Generating C# proxies
To generate C# proxies for the custom service, use the DfsProxyGen.exe utility supplied in the DFS
SDK. DfsProxyGen is a Windows form application that generates C# proxies based on a DFS service
WSDL and the service model file created by the generateArtifacts Ant task (see generateArtifacts
task, page 147). You will need to build and deploy the service before creating the C# proxies.
Note: You must run the DfsProxyGen utility locally and not from a network drive.
To generate C# proxies:
1. In the Shared assemblies field, add any shared assemblies used by the service. (There are none
for AcmeCustomService.) For more information on this see Creating shared assemblies for data
objects shared by multiple services, page 156.
2. In the Service model file field, browse to the service model file created by the generateArtifacts
Ant task. For AcmeCustomService this will be
emc-dfs-sdk-6.7\samples\AcmeCustomService\resources\services-samples-service-model.xml.
3. In the Wsdl uri field, supply the name of the WSDL of the deployed service, for example
http://localhost:7001/services/samples/AcmeCustomService?wsdl. Only URLs are permitted, not
local file paths, so you should use the URL of the WSDL where the service is deployed.
4. In the Output namespace field, supply a namespace for the C# proxy (for example,
samples.services.acme).
5. Optionally supply a value in the Output FileName field. If you don’t supply a name, the proxy
file name will be the same as the name of the service, for example AcmeCustomService.cs.
6. Click Create proxy.
The results of the proxy generation will appear in the Log field. If the process is successful, the name
and location of the result file will be displayed.
1. Run DfsProxyGen against the WSDL and service model file for ServiceA.
This will generate the proxy source code for the service and its data classes DataClass1 and
DataClass2.
2. Create a project and namespace for the shared classes, DataClass1 and DataClass2, that will be
used to build the shared assembly. Cut DataClass1 and DataClass2 from the proxy source
generated for ServiceA, and add them to new source code file(s) in the new project.
3. Annotate the shared data classes using XmlSerializer's [XmlType()] attribute, specifying the
WSDL namespace of the shared classes (for example
[XmlType(Namespace="http://myservices/datamodel/")]).
4. Build an assembly from the shared datamodel project.
5. Run DfsProxyGen against the WSDL and service model for ServiceB, referencing the shared
assembly created in step 4 in the Shared assemblies field.
Content Transfer
DFS supports standard WS transfer modes (Base64 and MTOM), as well as proprietary technologies
(UCF and ACS) that optimize transfer of content in distributed environments. This chapter will cover
content transfer generally, with an emphasis on MTOM and Base64, as well as accessing content from
ACS (Accelerated Content Services).
UCF content transfer is covered in a separate chapter (see Chapter 11, Content Transfer with Unified
Client Facilities).
For related information see Content model and profiles, page 95.
Content transfer is an area where the productivity layer (PL) provides a lot of functionality, so there
are significant differences in client code using the productivity layer and client code based on the
WSDL alone. This chapter provides examples showing how to do it both ways. The WSDL-based
samples in this chapter were written using JAX-WS RI 2.1.2.
This chapter includes the following topics:
• Base64 content transfer, page 157
• MTOM content transfer, page 159
• ContentTransferMode, page 163
• Content types returned by DFS, page 164
• Uploading content using Base64 or MTOM, page 165
• Downloading content using Base64 and MTOM, page 167
• Downloading UrlContent, page 169
Base64 content transfer
A DFS Base64 message on the wire encodes binary data within the Contents element of a DataObject.
The following is an HTTP POST used to create an object with content using the DFS object service
create method.
POST /services/core/ObjectService HTTP/1.1
SOAPAction: ""
Content-Type: text/xml;charset="utf-8"
Accept: text/xml, multipart/related, text/html, image/gif, image/jpeg,
*; q=.2, */*; q=.2
User-Agent: JAX-WS RI 2.1.3-b02-
Host: localhost:8888
Connection: keep-alive
Content-Length: 996463
<ns4:Value>gif</ns4:Value>
</ns4:Properties>
</ns3:Properties>
<ns3:Contents xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="ns5:BinaryContent"
    pageNumber="0"
    format="gif">
    <ns5:renditionType xsi:nil="true"/>
    <ns5:Value>R0lGODlhAAUABIc...[Base64-encoded content]</ns5:Value>
</ns3:Contents>
</ns3:DataObjects>
</dataPackage>
</ns8:create>
</S:Body>
</S:Envelope>
By contrast, an MTOM message on the wire transmits the binary data as a separate MIME part,
which the SOAP body references with an XOP Include element (listing abridged):
Content-Type: application/xop+xml;charset=utf-8;type="text/xml"
Content-Transfer-Encoding: binary

<Include xmlns="http://www.w3.org/2004/08/xop/include"
    href="cid:85f284b5-4f2c-4e68-8d08-de160a5b47c6@example.jaxws.sun.com"/>
</ns5:Value>
</ns3:Contents>
</ns3:DataObjects>
</dataPackage>
</ns8:create>
</S:Body>
</S:Envelope>

GIF89a[binary data...]
Workarounds
content. However, be aware that in this case the application will be limited to transfer
of content in the range of hundreds of megabytes for a 32-bit JVM, because on most
modern 32-bit Windows systems the maximum heap size will range from 1.4G to 1.6G (see
http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_heap_32bit). Although this
specific limitation does not apply to 64-bit versions of Windows, the issue will still exist if you do
not ensure that there is sufficient heap space to buffer very large objects in memory.
• You can create a custom service. WCF has a limitation (see http://msdn.microsoft.com/en-us/
library/ms789010.aspx) wherein the streamed data transfer mode is supported only when there is
a single parameter in the web service method signature. Therefore, in the custom service, all
parameters must be wrapped into a single custom class object containing all input parameters of
a method, as follows:
@DfsPojoService()
public class StreamingService
{
    public DataPackage create(DataRequest request) throws ServiceException
    {
        // DataRequest wraps DataPackage and OperationOptions;
        // the DataPackage might contain large content.
        // Do something with the uploaded content.
        ......
    }
}
In the App.config file, to enable streaming, set the transferMode attribute of DfsDefaultService
binding to Streamed.
Note:
• For downloading and uploading content, increase the time-out-related attributes (closeTimeout,
openTimeout, receiveTimeout, and sendTimeout) for the DfsDefaultService binding as required.
• For content downloading, in the App.config file, increase the value of the maxReceivedMessageSize
attribute of the DfsDefaultService binding to a larger value, such as 1000000000 bytes.
ContentTransferMode
The DFS ContentTransferMode is a setting that is used to influence the transfer mode used by DFS.
This section discusses how this setting applies in various types of clients. For a closely related
discussion of what types and content transfer mechanisms can be expected from DFS based on this
setting and other factors, see Content types returned by DFS, page 164.
In the DFS WSDLs ContentTransferMode is defined as follows:
<xs:simpleType name="ContentTransferMode">
<xs:restriction base="xs:string">
<xs:enumeration value="BASE64"/>
<xs:enumeration value="MTOM"/>
<xs:enumeration value="UCF"/>
</xs:restriction>
</xs:simpleType>
WSDL-based clients — In WSDL-based clients, ContentTransferMode influences the data type and
content transfer format that DFS uses to marshall content in HTTP responses. In a WSDL-based
client, ContentTransferMode only applies to content download from DFS, and in this context, only
Base64 and MTOM are relevant. To use the UCF content transfer mechanism, you need to write
client-side code that delegates the content transfer to UCF. WSDL-based clients in particular need
to be aware that DFS will use UrlContent in preference to MTOM or Base64 to transfer content, if
ACS is available (see Content types returned by DFS, page 164).
ContentTransferMode precedence
The ContentTransferMode setting can be stored locally on the client and passed to the service in a
number of different contexts.
• In a ContentTransferProfile stored in the service context.
• In a ContentTransferProfile passed in OperationOptions
The value passed in OperationOptions will take precedence over the setting in the service context.
Note: Currently, you cannot set the ContentTransferMode on a Content instance.
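For example, the following is a minimal sketch of the precedence rule, using setter names that
appear elsewhere in this guide (serviceContext is assumed to be an existing service context):
// Transfer mode set in the service context applies by default
ContentTransferProfile contextProfile = new ContentTransferProfile();
contextProfile.setTransferMode(ContentTransferMode.MTOM);
serviceContext.setProfile(contextProfile);

// A profile passed in OperationOptions overrides the context setting
// for this call only
ContentTransferProfile callProfile = new ContentTransferProfile();
callProfile.setTransferMode(ContentTransferMode.UCF);
OperationOptions options = new OperationOptions();
options.setContentTransferProfile(callProfile);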
These values govern whether DFS returns UrlContent (in the productivity layer they correspond
to the urlReturnPolicy setting in ContentProfile):

Value    Behavior
ALWAYS   Return UrlContent where URL content is available; fail with an exception where
         URL content is not available.
NEVER    Return actual content; never return UrlContent.
ONLY     Return UrlContent where URL content is available; return no content in the
         DataObject where URL content is not available.
PREFER   Return UrlContent where URL content is available; return actual content where
         URL content is not available.

The default value is PREFER.
If you are writing a WSDL-only client that does not use the productivity layer, then your code needs
to be aware at runtime of the content type returned by DFS and handle it appropriately. If you are
using the productivity layer, the PL provides convenience methods that support handling all Content
subtypes in a uniform way, and transparently handle the streaming of UrlContent to the client. To
see some sample code comparing these two approaches see Downloading content using Base64 and
MTOM, page 167 and Downloading UrlContent, page 169.
The transfer mode used to send the content over the wire is determined by the client
framework—in the case of this example by whether MTOM is enabled on the JAX-WS
ServicePort. The following snippet shows one means of enabling MTOM by passing an instance of
javax.xml.ws.soap.MTOMFeature when getting the service port from the service.
String objectServiceURL = contextRoot + "/core/ObjectService";
ObjectService objectService = new ObjectService(
    new URL(objectServiceURL),
    new QName("http://core.services.fs.documentum.emc.com/", "ObjectService"));
servicePort = objectService.getObjectServicePort(new MTOMFeature());
If you are using the productivity layer, the productivity layer runtime checks the
ContentTransferMode setting and takes care of converting the content type to an appropriate subtype
before invoking the remote service. The transfer mode used for the upload is determined by the
runtime, also based on the ContentTransferMode setting.
if (!testFile.exists())
{
    throw new IOException("Test file: " + testFile.toString() + " does not exist");
}
// ... (listing abridged) ...
contentProfile.setFormatFilter(FormatFilter.ANY);
For more information see ContentProfile, page 97 and ContentTransferProfile, page 99.
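For comparison, the following is a minimal productivity-layer upload sketch, assuming an already
configured objectService, repositoryName, testFile, and operationOptions (all names here are
illustrative):
DataObject dataObject = new DataObject(new ObjectIdentity(repositoryName),
                                       "dm_document");
dataObject.getProperties().set("object_name", "testobject");
// The PL runtime converts FileContent to an appropriate Content subtype
// based on the ContentTransferMode setting before invoking the service
dataObject.getContents().add(new FileContent(testFile.getAbsolutePath(), "gif"));
DataPackage result = objectService.create(new DataPackage(dataObject),
                                          operationOptions);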
The following is an example of a WSDL-based client method (JAX-WS) that shows content download
using the object service get operation. This method examines the type of the Content returned by the
operation, extracts the content value as a byte array and writes it to a file.
ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.setTransferMode(m_transferMode);
DataPackage dp = m_servicePort.get(objectIdentitySet, operationOptions);
The following productivity layer example does something similar; however it can use the
Content#getAsFile convenience method to get the file without knowing the concrete type of the
Content object.
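For example, a minimal sketch (assuming the same setup as the preceding samples):
DataPackage dataPackage = objectService.get(objectIdentitySet, operationOptions);
Content content = dataPackage.getDataObjects().get(0).getContents().get(0);
// getAsFile works for any concrete Content subtype, including UrlContent
File resultFile = content.getAsFile();
System.out.println("Content written to " + resultFile.getAbsolutePath());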
Downloading UrlContent
UrlContent objects contain a string representing an ACS (Accelerated Content Services) URL. These
URLs can be used to retrieve content using HTTP GET, with the caveat that they are set to expire,
so they can’t be stored long term. In a distributed environment with configured network locations,
ACS content transfer enables transfer of content from the nearest network location based on the
geoLocation setting in the ContentTransferProfile.
A client can get UrlContent explicitly using the Object service getContentUrls operation. UrlContent
can also be returned by any operation that returns content if an ACS server is configured and active
on the Content Server where the content is being requested, and if the requested content is available
via ACS. Clients that do not use the productivity layer should detect the type of the content returned
by an operation and handle it appropriately. In addition, the ACS URL must be resolvable when
downloading the UrlContent.
Note: The expiration time for an ACS URL can be configured by setting the default.validation.delta
property in acs.properties. The default value is 6 hours. For more information see the EMC
Documentum Content Server Distributed Configuration Guide.
A client that does not use the productivity layer needs to handle UrlContent that results from a get
operation or a getContentUrls operation by explicitly downloading it from ACS. The following
JAX-WS sample extracts the UrlContent from the results of a get operation, then passes the URL and a
FileOutputStream to a second sample method, which downloads the ACS content to a byte array
which it streams to the FileOutputStream.
ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.setTransferMode(m_transferMode);
DataPackage dp = m_servicePort.get(objectIdentitySet, operationOptions);
The following sample method does the work of reading the content from ACS to a buffer and
streaming it to an OutputStream.
private void downloadContent(String url, OutputStream os) throws IOException
{
    InputStream inputStream =
        new BufferedInputStream(new URL(url).openConnection().getInputStream());
    int bytesRead;
    byte[] buffer = new byte[16384];
    while ((bytesRead = inputStream.read(buffer)) != -1)
    {
        os.write(buffer, 0, bytesRead);
    }
    inputStream.close();
}
If on the other hand you are using the productivity layer, the PL runtime does most of the work
behind the scenes. When retrieving content using a get operation, you can call getAsFile on the
resulting content object without knowing its concrete type. If the type is UrlContent, the runtime
will retrieve the content from ACS and write the result to a file.
The following example gets UrlContent explicitly using the Object service getContentUrls function
and writes the results to a file.
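A hedged sketch of that approach follows; the exact return type of getContentUrls varies by DFS
version, so treat the collection handling here as an assumption to verify against the API
documentation:
// Assumption: getContentUrls returns a list of content sets, one per
// requested object, each holding UrlContent instances
List<ObjectContentSet> contentSets =
    objectService.getContentUrls(objectIdentitySet);
UrlContent urlContent =
    (UrlContent) contentSets.get(0).getContents().get(0);
// The PL resolves the ACS URL and writes the content to a local file
File resultFile = urlContent.getAsFile();
System.out.println("UrlContent written to " + resultFile.getAbsolutePath());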
Content Transfer with Unified Client Facilities
Unified Client Facilities (UCF) is an EMC technology that orchestrates direct transfer of content
between a client computer and a Documentum repository. UCF is fully integrated with DFS, and can
be employed as the content transfer mechanism in many types of DFS consumer application. The DFS
SDK provides client libraries to support UCF content transfer in Java and in .NET. The Java and .NET
libraries are integrated into the DFS productivity layer runtime to simplify usage by productivity
layer applications. Applications that do not use the productivity layer can use the UCF client libraries
directly in their applications outside of the DFS productivity layer runtime. Web applications can
package the UCF client libraries into an applet or an ActiveX object to enable UCF content transfer
between a browser and a Content Server. Clients that use the .NET libraries do not need to have a
Java Runtime Environment installed on their system.
This chapter discusses the use of UCF for content transfer in a DFS context, and includes the
following sections:
• Overview of Unified Client Facilities, page 173
• Tips and tricks, page 179
• Tutorial: Using UCF in a Java client, page 183
• Tutorial: Using UCF .NET in a .NET client, page 193
System requirements
DFS provides both Java and .NET UCF integrations. For the Java integration, JRE 1.5.0_12 or later and
DFS 6.0SP1 or later are required. For the native .NET integration, .NET Framework 3.5 SP 1 or later
and DFS 6.7 or later are required. DFS .NET productivity layers prior to 6.7 rely on the Java UCF
integration and require a JRE to be installed in the client environment.
In DFS 6.7 or later, native .NET UCF client-side components are supported, either as an ActiveX
object (for web applications) or as a .NET assembly (for thick clients), and no JRE is required on
the client machine. Native DFS UCF .NET integration on the client side will require DFS services
version 6.7 or later on the server side.
UCF is a stateful protocol relying on HTTP sessions to preserve state. DFS requires more than one trip
to the server side to establish a UCF connection. For this reason, it is required to use
sticky-session-based load balancing, so that all requests that are part of the same HTTP session are
routed to the same backend node.
Note:
• JSESSIONID is preserved by DFS for session management purposes such as load balancing. The
client should not use the JSESSIONID cookie for other purposes.
• Ensure that you do not pass the JSESSIONID cookie to the UCF server unless the cookie was
returned by a previous UCF connection.
The following hedged sketch demonstrates how to use the same HTTP session for different UCF
connections (the UcfConnection constructor variants are shown elsewhere in this chapter; the
cookie handling here is illustrative):
// Create an initial UCF connection with cookies; passedInCookies must
// not contain JSESSIONID
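UcfConnection c1 = new UcfConnection(ucfServerUrl, passedInCookies, null);
String jsessionId = c1.getJsessionId();

// Pass the JSESSIONID returned by the first connection back on later
// connections so that they share the same HTTP session (cookie-header
// formatting is an assumption)
UcfConnection c2 = new UcfConnection(ucfServerUrl,
    passedInCookies + "; JSESSIONID=" + jsessionId, null);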
As a result, UCF failover is not supported. In case of a node failure, the whole UCF transfer process,
including establishing a new UCF connection, must be restarted.
Once an HTTP session is established, DFS will reuse it for the same service instance to avoid any
load balancing issues. (More accurately, DFS will reuse the same HTTP session for the same
service, provided that the client application does not update the ActivityInfo instance in the
ContentTransferProfile instance.) DFS will throw the following exception if the consumer tries to
override the existing HTTP JSESSIONID value with a different one:
Can not execute DFS call: provided HTTP session id "xxx" overrides
original HTTP session id "xxx". Please use original HTTP session
for DFS calls.
Doing otherwise will cause the call to fail in load-balanced environments.
DFS-orchestrated UCF
DFS-orchestrated UCF is the model in which the DFS runtime takes full responsibility for initiating
the UCF connection and performing the content transfer between the client and the server.
Developers can use any type of DFS Content with this model except UcfContent, because UcfContent
implies that the DFS runtime does not participate in the content transfer process. To enable
DFS-orchestrated UCF, it is enough to set the ContentTransferMode to UCF in the ServiceContext, as
shown in the Java listing below:
IServiceContext context = ContextFactory.getInstance().newContext();
ContentTransferProfile profile = new ContentTransferProfile();
profile.setTransferMode(ContentTransferMode.UCF);
context.setProfile(profile);
A typical use of DFS-orchestrated UCF would be a thick client invoking the DFS remote web services
API.
Client-orchestrated UCF
Client-orchestrated UCF is the model in which the process of establishing a UCF connection is
delegated to the client rather than the DFS runtime.
The process of establishing a UCF connection consists of a set of steps that must be taken in a specific
order for the procedure to succeed. First of all, a UCF installer must be downloaded from the server
side. It will check whether a UCF client is already present in the environment and if it is, whether it
needs to be upgraded or not. Before the UCF installer can be executed, it is necessary to confirm its
author and whether its integrity has been compromised or not. This is achieved through digitally
signing the installer and verifying the file signature on the client side. The downloaded UCF installer
is executed only if it considered trusted. Once running, it will install and launch the UCF client and,
eventually, request a UCF connection ID from the UCF server once the process is successful.
To encapsulate this complexity, DFS provides the UcfConnection class. This class takes the URL of
the UCF server as a constructor argument and allows the developer to obtain a UCF connection ID
through a public method call. The provided URL should point to the location of the UCF installer and
ucf.installer.config.xml on the remote server. This class is available for both Java and .NET consumers.
Both Java and .NET DFS Productivity Layers rely on UcfConnection to establish UCF connections.
A typical use of client-orchestrated UCF would be a browser client in a web application, in which
UCF is downloaded from the server, and the client code invokes the UcfConnection class to establish
the UCF connection ID.
The advantage of browser integration is that the content can be transferred from the client machine
directly to the content server, without being stored on any intermediary tier. For this to happen, the
browser has to establish the UCF connection from the client machine to the DFS server. Once this is
done, the UCF connection details along with the path of the file(s) to be transferred must be provided
to the web application which initiates the transfer as a client orchestrated UCF.
In a Java environment, UCF integration is accomplished using an applet. The applet will need
"ucf-connection.jar", which is part of the DFS SDK, in the classpath. The following code snippet can
be used in the applet to establish a UCF connection:
UcfConnection c = new UcfConnection(new URL
(getParameter("ucf-server")));
uid = c.getUid();
jsessionId = c.getJsessionId();
where ucf-server is a string representing the DFS service context, such as
"http://host:port/context-root/module", for example "http://localhost:8888/services/core". The values
of "uid" and "jsessionId" must be passed on to the browser and, eventually, to the web application
initiating the UCF content transfer. One way of passing these values to the browser is through the
JSObject plugin, which allows Java to manipulate objects that are defined in JavaScript.
In a .NET environment the UCF integration can be accomplished using an ActiveX object, which
will have to reference and package "Emc.Documentum.FS.Runtime.Ucf.dll", which is part of the
SDK. The following method can be defined in the C# based ActiveX component to establish a UCF
connection and return its ID:
[ComVisible(true)]
public string GetUid(String jsessionId, String url)
{
UcfConnection c = new UcfConnection(new Uri(url), jsessionId, null);
return c.GetUcfId();
}
As with the Java integration, the UCF connection ID (uid) and "jsessionId" must be passed to the web
application initiating the UCF content transfer.
The client application can orchestrate UCF content transfer using the established UCF connection.
This can be done in a client that uses DFS in local mode, or in a client that uses the DFS web services.
In either case, to enable this type of integration the application developer has to provide the HTTP
and UCF session IDs to the DFS runtime through an ActivityInfo instance:
ContentTransferProfile profile = new ContentTransferProfile();
profile.setTransferMode(ContentTransferMode.UCF);
profile.setActivityInfo(new ActivityInfo(jsessionId, null,
ucfId, null));
serviceContext.setProfile(profile);
Applications that do not use the productivity layer must, in addition to setting the transfer mode and
activity info on the service context, provide explicit UcfContent instances in the DataObject:
UcfContent content = new UcfContent();
content.setLocalFilePath("path-to-file-on-the-client-machine");
DataObject object = new DataObject();
object.getContents().add(content);
Authentication
UCF does not have any built-in authentication mechanisms. It is controlled from the server side by
DFC, which begins the content transfer only after authenticating the user. This leaves the door open
for Denial of Service attacks as clients can establish as many UCF connections as they wish.
HTTP-proxy-based SSO solutions like WebSEAL, ClearTrust and SiteMinder address this concern
by allowing only authenticated HTTP requests into the protected web object space. Thus, if a UCF
server is part of the protected object space, only users authenticated by the SSO proxy would be
able to establish a UCF connection.
To establish a secure UCF connection, you must pass the SSO cookie to the UcfConnection constructor:
UcfConnection connection = new UcfConnection(ucfServerUrl, cookieHeader,
    targetDeploymentId);
Supply the ActivityInfo to the service using either method described in Alternative methods of
supplying ActivityInfo and their relative precedence, page 179. The ActivityInfo should have
autoCloseConnection set to false.
If the consumer sets autoCloseConnection to false, the consumer is responsible for closing the
connection. This can be accomplished by setting autoCloseConnection to true before the consumer
application’s final content transfer using that connection. If the consumer fails to do this, the UCF
connection will be left open, and the UCF client process will not be terminated.
This optimization removes the overhead of launching the UCF client multiple times. It is only
effective in applications that will perform multiple content transfer operations between the same
endpoints. If possible, this overhead can be more effectively avoided by packaging multiple objects
with content in the DataPackage passed to the operation.
Note: If high-performance content transfer is required for UCF.NET, you must initialize the
autoCloseConnection property of the ActivityInfo class to false. This setting is not applicable to
Java UCF.
The batchImport method does the work of setting up the context and DataObject and invoking
the create operation:
private static void batchImport(IServiceContext theContext,
    ActivityInfo theInfo)
{
    try
    {
        OperationOptions theOptions = new OperationOptions();
        ContentTransferProfile theTransferProfile =
            new ContentTransferProfile();
        theTransferProfile.setTransferMode(ContentTransferMode.UCF);
        theTransferProfile.setActivityInfo(theInfo);
        theOptions.setContentTransferProfile(theTransferProfile);
        ContentProfile theContentProfile = new ContentProfile();
        theContentProfile.setFormatFilter(FormatFilter.NONE);
        theOptions.setContentProfile(theContentProfile);
        theOptions.setPropertyProfile(getPropertyProfile());

        // theDataPackage is assumed to have been populated earlier in the sample
        theObjectService.create(theDataPackage, theOptions);
        System.out.println("File imported");
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage());
    }
}
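For illustration, a minimal sketch of the reuse pattern, assuming the fourth ActivityInfo constructor
argument is autoCloseConnection (as in the servlet sample later in this chapter) and that the property
has a conventional setter (the setter name is an assumption):
ActivityInfo info = new ActivityInfo(jsessionId, null, ucfId, Boolean.FALSE);
batchImport(theContext, info);       // connection remains open for reuse
// ... additional transfers using the same ActivityInfo and context ...
info.setAutoCloseConnection(true);   // assumption: setter name for the property
batchImport(theContext, info);       // final transfer; DFS closes the connection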
If instead you create a new ActivityInfo for each operation, the server will start fresh UCF connections
and the optimization will not work.
Value                  Description
Null or empty string   Take no action.
dfs:view               Open the file in view mode using the application associated with
                       the file type by the Windows operating system.
dfs:edit               Open the file in edit mode using the application associated with
                       the file type by the Windows operating system.
dfs:edit?app=_EXE_     Open the file for editing in a specified application. To specify
                       the application, replace _EXE_ with a fully-qualified path to the
                       application executable, or with just the name of the executable. In
                       the latter case the operating system must be able to find the
                       executable; for example, in Windows, the executable must be found
                       on the %PATH% environment variable. Additional parameters can be
                       passed to the application, preceded by an ampersand (&).
Requirements
UCF is dependent on the availability of a Java 5 or 6 JRE on the client machine to which the UCF jar
files are downloaded. It determines the Java location using the JAVA_HOME environment variable.
7. Create a servlet for orchestrating the UCF content transfer, page 190
The Apache reverse proxy can be configured by including the following elements in the httpd.conf file:
# ProxyPass
# enables Apache as a forwarding proxy
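The directives themselves are not shown in the source. A minimal sketch of such a configuration,
assuming the application server hosting DFS listens on localhost:8080 (host, port, and paths are
illustrative):
# forward /services requests to the application server hosting DFS
ProxyPass /services http://localhost:8080/services
ProxyPassReverse /services http://localhost:8080/services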
The sample HTML presents the user with two buttons and a text box. When the user clicks the Use
Ucf button, a second popup is launched while the UCF connection is established by the applet. When
the applet finishes, the second window closes and the user can import a file specified by a file
path entered in the text box.
Note: This sample has been implemented with two buttons for demonstration purposes. A button
with the sole function of creating the UCF connection would probably not be useful in a production
application. Make sure not to click this button and then close the browser without performing the
import: this will leave the UCF client process running.
var winPop;
function OpenWindow()
{
    // the start of this string literal was lost in the source; restored
    var props = "scrollbars=0," +
        "resizable=0," +
        "width=300," +
        "height=400";
    winPop = window.open("dfsSample-popup.html", "winPop", props);
}
function validate()
{
    if (document.form1.jsessionId.value == "" ||
        document.form1.uid.value == "")
    {
        alert("UCF connection is not ready, please wait");
        return false;
    }
    else if (document.form1.file.value == "")
    {
        alert("Please enter a file path");
        return false;
    }
    else
    {
        return true;
    }
}
</script>
</head>
<body>
<h2>DFS Sample</h2>
<form name="form1"
onSubmit="return validate()"
method="post"
action="http://localhost:80/dfsWebApp/DfsServiceServlet">
Enter File Path: <input name="file" type="text" size=20><br>
<input name="jsessionId" type="hidden"><br>
<input name="uid" type="hidden"><br>
Note that hidden input fields are provided in the form to store the jsessionId and uid values that will
be obtained by the applet when it instantiates the UcfConnection.
function setHtmlFormIdsFromApplet()
{
    if (arguments.length > 0)
    {
        window.opener.document.form1.jsessionId.value = arguments[0];
        window.opener.document.form1.uid.value = arguments[1];
    }
    window.close();
}
</script>
</head>
<body>
<center><h2>Running Applet ........</h2></center>
<center>
<applet CODE=SampleApplet.class
CODEBASE=/dfsWebApp
WIDTH=40
HEIGHT=100
ARCHIVE="dfsApplet.jar">
</applet>
</center>
</body>
</html>
The popup HTML downloads the applet, and also includes a Javascript function for setting values
obtained by the applet in dfsSample.html (see HTML for user interface, page 196). The applet will use
the Java Plug-in to call this JavaScript function.
import java.applet.*;
import java.net.URL;
import netscape.javascript.JSObject;
// assumption: UcfConnection is in the same package as the Cookie class shown
// later in this guide
import com.emc.documentum.fs.rt.ucf.UcfConnection;

// class and method declarations restored; they were lost in the source
public class SampleApplet extends Applet
{
    public void init()
    {
        try
        {
            UcfConnection conn = new UcfConnection(
                new URL("http://localhost:80/services/core"));
            System.out.println("jsessionId=" + conn.getJsessionId() +
                ", uid=" + conn.getUid());
            JSObject win = JSObject.getWindow(this);
            win.call("setHtmlFormIdsFromApplet", new Object[]
                {conn.getJsessionId(), conn.getUid()});
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    public void start()
    {
    }
}
The applet launches a UCF client process on the end-user machine, which establishes a connection to
the UCF server, obtaining the jsessionId and the uid for the connection. It uses Java Plug-in JSObject
to call the JavaScript function in the HTML popup, which sets the jsessionId and uid values in the
user interface HTML form, which will pass them back to the servlet.
The applet that you construct must contain all classes from the following archives, provided in the
SDK:
• ucf-installer.jar
• ucf-connection.jar
• emc-dfs-rt.jar
To create the applet, extract the contents of these jar files and place them in the same folder as the
compiled SampleApplet class, shown in the preceding step. Bundle all of these classes into a new
jar file called dfsApplet.jar.
Applets must run in a secure environment, and therefore must include a signed RSA certificate issued
by a certification authority (CA), such as VeriSign or Thawte. The certificate must be imported by
the end user before the applet code can be executed. You can obtain a temporary certificate for test
purposes from VeriSign, and sign the jar file using the Java jarsigner utility. Detailed instructions
regarding this are available at http://java.sun.com/javase/6/docs/technotes/guides/plugin/developer_
guide/rsa_signing.html#signing.
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.ServletException;
import java.io.IOException;
import java.io.PrintWriter;
try
{
    IObjectService service = getObjectService(req);
    DataPackage dp = new DataPackage();
    DataObject vo = new DataObject(new ObjectIdentity(docbase),
        "dm_document");
    vo.getProperties().set("object_name", "testobject");
    int fileExtIdx = file.lastIndexOf(".");
    // ... (elided in source: a UcfContent is constructed from the file path,
    // with its format taken from the extension, file.substring(fileExtIdx + 1))
    vo.getContents().add(content);
    dp.addDataObject(vo);
    // ...
}
Note that you will need to provide values for username, password, and docbase fields to enable
DFS to connect to your test repository.
In the sample, the getObjectService method does the work of obtaining the jsessionId and the uid
from the http request.
String jsessionId = req.getParameter("jsessionId");
String uid = req.getParameter("uid");
Notice that in addition to the jsessionId and uid, the ActivityInfo is instantiated with two other values.
The first, which is passed null, is the initiatorSessionId. This is a DFS internal setting to which the
consumer should simply pass null. The second setting, which is passed true, is autoCloseConnection.
Setting this to true (which is also the default) causes DFS to close the UCF connection after the
service operation that transfers content. For more information on using this setting see Optimization:
controlling UCF connection closure, page 179.
Finally, getObjectService instantiates the Object service using the newly created context.
IObjectService service = ServiceFactory.getInstance().getRemoteService(
    IObjectService.class, context, "core", serverUrl + "/services");
return service;
The key is that the context has been set up to use the UCF connection to the UCF client running on
the end-user machine, obtained by the applet, rather than a standard DFS-orchestrated UCF connection.
The doPost method finishes by using the service to perform a test transfer of content, using the
Object service create method.
To accomplish the same task as the getObjectService method without the productivity layer, you
need to generate proxies for the ContextRegistryService and ObjectService using a tool like the
JAX-WS reference implementation or Axis2. You can then use these proxies to create the ActivityInfo,
ContentTransferProfile, and ServiceContext objects as well as the ObjectService. Because the
generated proxies contain only default constructors, you have to use set methods to set values for the
specific instance variables instead of passing them as arguments into the constructor. The following
code demonstrates how to create the ActivityInfo, ContentTransferProfile, and ServiceContext objects
with proxies that are generated by Axis2 1.4:
RepositoryIdentity identity = new RepositoryIdentity();
identity.setRepositoryName(docbase);
identity.setUserName(username);
identity.setPassword(password);
ServiceContext context = new ServiceContext();
context.getIdentities().add(identity);
ActivityInfo activity = new ActivityInfo();
activity.setSessionId(jsessionId);
activity.setInitiatorDeploymentId(null);
activity.setActivityId(uid);
activity.setClosed(true);
ContentTransferProfile ct = new ContentTransferProfile();
ct.setTransferMode(ContentTransferMode.UCF);
ct.setActivityInfo(activity);
context.getProfiles().add(ct);
You can then instantiate the ObjectService with the ServiceContext factory method. Applications that
do not use the productivity layer must, in addition to setting the transfer mode and activity info on
the service context, provide explicit UcfContent instances in the DataObject:
UcfContent content = new UcfContent();
content.setLocalFilePath("path-to-file-on-the-client-machine");
DataObject object = new DataObject();
object.getContents().add(content);
Requirements
UCF .NET depends on the availability of .NET framework 3.5SP1 on the client machine on which the
UCF assembly files are downloaded.
For simplicity, we installed the Apache proxy server and application server on the same machine. The
following network topology depicts the test application.
The Apache reverse proxy can be configured by including the following elements in the httpd.conf file:
# ProxyPass
# enables Apache as forwarding proxy
The sample HTML presents the user with a form for importing a file with UCF .NET.
Note: This HTML has been used for testing the ActiveX component within a CAB file provided by
the DFS SDK.
JavaScript is used in the HTML header to launch the UcfLauncherCtrl ActiveX control and place
values required by DFS in the form fields.
function startUcf()
{
    try {
        var ucfClient = document.getElementById("UcfLauncherCtrl");
        ucfClient.init();
        ucfClient.start();
    }
    catch (e) {
        alert("Failed to start UCF client: " + e.message);
    }
}
</script>
</head>
The UcfLauncher control in UcfClient.aspx is used to start the ActiveX object with the required
startup parameters. It is referenced in the HTML body as follows:
<body onload="startUcf()" text="#0033cc">
<object id="UcfLauncherCtrl" classid="CLSID:<%= UcfUtil.
UcfLauncherClassId %>"
codebase="<%= UcfUtil.UcfLauncherControlUri %>">
<param name="CONTX_PATH" value="<%= UcfUtil.UcfContextPath %>" />
<param name="CLICKONCE_REL_PATH" value="/../" />
<param name="UCF_LAUNCHER_CONFIG" value="<%= UcfUtil.UcfInstallerConfigBase64 %>" />
<param name="UCF_REQ_KEY_VALUE" value="reqKey" />
<param name="PARAM_UCF_LAUNCHER_HEADERS" value="headers" />
<param name="PARAM_UCF_LAUNCH_EXCLUDE_COOKIES" value="excludeCookies=ABCD" />
<param name="PARAM_UCF_LAUNCHER_MODE" value="2" />
Note: The UcfLauncher.cab file serves as a remote resource file in the codebase attribute for
this sample.
Although the UcfLauncher.cab file is not packaged in the emc-dfs.ear or emc-dfs.war file, you can
make it downloadable from the emc-dfs.ear or emc-dfs.war file by following these steps:
• Locate the UcfLauncher.cab file in the DFS-SDK under emc-dfs-sdk-6.7\lib\java\ucf\browser.
• Package UcfLauncher.cab file as emc-dfs.ear\services-core.war\UcfLauncher.cab
After you deploy DFS, the CAB file can be downloaded from http://localhost:8080/services/core/
UcfLauncher.cab#Version=6,50,0,220.
UCF .NET supports 32-bit and 64-bit browsers. DFS SDK provides two CAB files for use:
• For 32 bit ActiveX, use the UcfLauncher.cab file.
• For 64 bit ActiveX, use the UcfLauncher64.cab file.
You can locate the CAB files, in the DFS SDK, under emc-dfs-sdk-6.7\lib\java\ucf\browser.
The web server that hosts the DFS consumer determines which CAB file must be installed on the
client, based on the request.
Note: You can implement your own ActiveX control to establish the UCF
connection. To do so, you must reference Emc.Documentum.FS.Runtime.Ucf.dll and
UcfInstaller.dll in your ActiveX control.
The import functionality receives the UCF connection ID and uses it for the DFS service operation.
<td class="style3">
<input id="ImportPath" type="file" />
</td>
<td>
<asp:Button ID="ImportButton" runat="server"
onclick="ImportButton_Click" Text="Import"
OnClientClick="getUcfId()"/>
</td>
</tr>
</table>
DFS provides an integration with the Netegrity SiteMinder Policy Server and RSA ClearTrust Server
single sign-on plug-ins, which are available with Content Server.
A Kerberos plug-in is also available, but has a different interface, documented in Chapter 13, Using
Kerberos Authentication in DFS Clients. The information in this chapter does not apply to Kerberos
authentication.
In .NET, an SsoIdentity can be constructed directly from the HTTP request:
SsoIdentity(HttpRequest request)
If the client has credentials in the form of a user name and token string, the client can set the user
name and token string in an alternate constructor as shown in the sample below. The SsoIdentity,
like other objects of the Identity data type, is set in the service context and used in instantiating
the service object:
public void callSchemaServiceSso(String token) throws Exception
{
    // ... (listing body elided in source)
}
Note that SsoIdentity, like its parent class BasicIdentity, does not encapsulate a repository name.
SsoIdentity, like BasicIdentity, will be used to login to any repositories in the service context whose
credentials are not specified in a RepositoryIdentity. You can use SsoIdentity in cases where the
login is valid for all repositories involved in the operation, or use SsoIdentity as a fallback for a
subset of the repositories and supply RepositoryIdentity instances for the remaining repositories.
Also note that because SsoIdentity does not contain repository information, the user name and
password are authenticated against the designated global registry. If there is no global registry
defined, authentication fails.
You can provide a new SSO token with each request to handle SSO tokens that constantly change
and whose expiration times are not extended on every request. Note however, that a ServiceContext
object should contain only one SsoIdentity, so when you add a new SsoIdentity to the ServiceContext,
you should discard the old one.
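For example, a minimal sketch using the alternate constructor described above (userName and
newToken are placeholders for the current user and the freshly obtained SSO token):
// build a fresh SsoIdentity for this request; any SsoIdentity added
// previously should first be removed from the context
serviceContext.addIdentity(new SsoIdentity(userName, newToken));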
Documentum Content Server supports Kerberos authentication as of version 6.7, providing
a secure single sign-on (SSO) solution using Windows Integrated Authentication. Content Server
supports Kerberos using Microsoft Active Directory Domain Services for Kerberos Key Distribution
Center (KDC) services in the following ways:
• In a single domain.
• In two-way trusts between multiple domains in the same forest only; that is, cross-forest trusts are
not supported.
Note: The DFS client and server must be in the same domain, whereas Content Server can be in a
different domain.
The DFS 6.7 web services can be configured to use server-side JAX-WS handlers that interface with
the Content Server Kerberos implementation. In addition, the DFS 6.7 SDK includes new classes that
support Kerberos authentication for local Java clients, remote Java clients, and .NET clients. DFS
SOAP clients that do not use the support classes in the SDK can authenticate against DFS web services
using WS-Security headers that comply with the Kerberos Token Profile 1.1 specification.
The DFS Kerberos API deals specifically with transferring authentication information to the
DFS service, using either a remote web service call or a local Java API call. The API does not
address obtaining Kerberos tickets from the Kerberos Key Distribution Center (KDC). Because DFS
applications are multi-tiered, Kerberos integration is based on delegated authentication. All Kerberos
tokens provided to DFS through the web services API must be “forwardable”. The local Java API
accepts only Kerberos Ticket Granting Tickets (TGTs).
This chapter focuses specifically on the use of the DFS Kerberos API to integrate DFS-based
consumers with local or remote DFS services that interact with Content Server instances enabled
for Kerberos authentication. General information about Kerberos, as well as details regarding
obtaining service tickets from a Kerberos Key Distribution Center (KDC) are outside the scope of this
documentation. The following documents may be useful in that they address matters pertaining to
Kerberos that are not addressed here.
For general information on Kerberos, refer to:
http://web.mit.edu/Kerberos/
http://technet.microsoft.com/en-us/library/bb742516.aspx
For information on the Java GSS API:
http://java.sun.com/products/jndi/tutorial/ldap/security/gssapi.html
http://java.sun.com/j2se/1.5.0/docs/guide/security/jgss/tutorials/index.html
For additional information on Kerberos single sign-on in Java refer to http://java.sun.com/j2se/1.5.
0/docs/guide/security/jgss/single-signon.html.
A BinaryIdentity would typically be populated with a Ticket Granting Ticket and then added to a
list of identities in the service context:
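The code that populates the identity is not shown here; a minimal sketch, assuming a BinaryIdentity
constructor that wraps the TGT (the exact signature may differ in your SDK version):
IServiceContext context = ContextFactory.getInstance().newContext();
// tgt: the java.lang.Object returned by the Kerberos utility (see the helper below)
context.addIdentity(new BinaryIdentity(tgt)); // assumption: constructor accepts the TGT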
IObjectService service = ServiceFactory.getInstance()
.getLocalService(IObjectService.class, context);
Figure 23. Web application using local DFS and Kerberos authentication
In steps 1–4 in the diagram a browser client obtains a service ticket from the KDC and passes it to
the web application as a SPNEGO token.
Steps 5–7 are the critical steps from the point of view of DFS support:
• In step 5 the web application calls Kerberos utility static methods to extract the service ticket (ST)
from the SPNEGO token, and in step 6 the web application calls the Kerberos utility again to accept
the ST and get a Ticket Granting Ticket (TGT) as a result. These steps could be performed with a
helper method like the following:
public String getDFSTGTfromSPNEGOToken(String SPNEGO)
throws GSSException
{
String dfs_st = KerberosUtility.getSTFromSpenegoToken(SPNEGO);
if (dfs_st != null)
{
return KerberosUtility.accept(m_source_spn, dfs_st);
}
return null;
}
Here m_source_spn is a string containing the SPN of the service to be accepted (that is, the SPN
of the web application). The result is a java.lang.Object representing the TGT that can be passed
to the DFS local API.
• In step 7 the web client instantiates a BinaryIdentity using the result returned by the Kerberos
utility and sets the identity in the serviceContext.
Note: Registration of the service context containing the Kerberos credentials is not currently
supported.
IObjectService service = ServiceFactory.getInstance()
.getLocalService(IObjectService.class, context);
From step 8 onward, DFC uses the TGT to obtain STs from the Kerberos utility for every repository
involved in the operation. These STs have the same login information as the original ST received
from the client, and use Kerberos delegation (provided by the Kerberos utility) to enable Content
Server to authenticate the credentials. (These steps are initiated by the DFS runtime and do not
require any code in your application.)
Note: When implementing the Kerberos authentication in a multi-domain environment, you must
observe the following rules:
• The source SPN accepted in the Kerberos utility cannot end with any realm name.
• The JAAS LoginModule to accept the SPN must be generated by using the Quest library. Add the
following line to your DFS local client code before Kerberos handling happens; this notifies the
Quest library of the Kerberos name servers:
System.setProperty("jcsi.kerberos.nameservers", "KDC machine-IP
address");
• You cannot pass the local initialized TGT to BinaryIdentity. The TGT must be generated by
using the KerberosUtility API.
In Step 1 in the diagram, the DFS client is already assumed to be in possession of the ST obtained
from the KDC. If the client is using the DFS SDK libraries, the DFS client sets the ST in a client-side
JAX-WS handler or WCF behavior. The JAX-WS handler or WCF behavior takes care of serializing
the Kerberos service ticket in the SOAP WS-Security header. On the server, server-side JAX-WS
handlers take care of validating the Kerberos service ticket using the Kerberos utility (steps 3 and 4),
and passing the ticket to the DFC layer for authentication on the Content Server (steps 5–9).
Note: Due to the Kerberos V5 anti-replay mechanism, each DFS request has to carry a unique service
ticket.
From a DFS integration perspective, the main responsibility of the DFS consumer is to provide the
ST that it has obtained from the KDC for the DFS service to client-side JAX-WS handlers (Java) or
WCF behaviors (.NET). The following sections describe the APIs provided in the DFS SDK for Java
and .NET consumers for this purpose. JAX-WS and WCF clients that do not use the productivity
layer can make use of the custom JAX-WS SOAP handler or WCF endpoint behavior provided in the
DFS SDK. Other types of SOAP clients will need to ensure that the Kerberos ticket is contained in
the WS-Security as defined in the Oasis Kerberos Token Profile 1.1 specification. A SOAP sample
excerpted from this specification is shown in Kerberos Token 1.1 security header, page 211.
Java productivity-layer consumers use the KerberosTokenHandler class to add the Kerberos service
ticket to SOAP processing, as shown in the example below. (You can find more Java client samples
from the SDK in the emc-dfs-sdk-6.7\samples\security\kerberos\DfsJavaKerberosDemo
directory.)
The getRemoteService method is overloaded so that it can pass a list of JAX-WS handlers that the
framework will invoke when creating the SOAP message.
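A minimal sketch of this pattern, assuming a setter on KerberosTokenHandler for the service ticket
(the setter name is an assumption; consult the SDK sample for the actual API):
// wrap the Kerberos service ticket (ST) in the SDK-provided handler
KerberosTokenHandler handler = new KerberosTokenHandler();
handler.setBinarySecurityToken(serviceTicket); // assumption: setter name
List<Handler> handlers = new ArrayList<Handler>();
handlers.add(handler);

// pass the handler chain when instantiating the remote service
IObjectService service = ServiceFactory.getInstance().getRemoteService(
    IObjectService.class, context, "core", serverUrl + "/services", handlers);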
Note: A JAX-WS client that does not use the full DFS productivity layer could also use the
KerberosTokenHandler to add serialization of the Kerberos token to JAX-WS SOAP processing, by
adding it to the handler chain without using the getRemoteService productivity-layer method.
The GetRemoteService method has been overloaded so that it can pass a list of custom behaviors
that the framework will invoke when creating the SOAP message.
Note:
• A WCF client that does not use the full DFS productivity layer could also use the
KerberosTokenHandler to add serialization of the Kerberos token to WCF SOAP processing, by
adding the custom endpoint behavior without using the getRemoteService productivity-layer
method.
• To generate the service ticket for .NET users, the Kerberos delegation level needs to be enabled by
setting the Impersonation Level to Delegate (ImpersonationLevel.Delegate).
IBM Tivoli Access Manager for e-business WebSEAL is a high-performance, multi-threaded web
server that applies fine-grained security policy to a protected network. WebSEAL incorporates
back-end web application server resources into its security policy, and can provide single sign-on
(SSO) solutions. WebSEAL acts as a reverse web proxy by receiving HTTP or HTTPS requests from
a web browser and delivering content from its own web server or from back-end web application
servers. Requests passing through WebSEAL are evaluated by its own authorization service to
determine whether the user is authorized to access the requested resource.
EMC Documentum can integrate with WebSEAL, its SSO solution, or any other SSO solution
supported by WebSEAL.
The related IBM documentation at http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp
provides more information about installing and configuring the WebSEAL server.
In a Documentum/WebSEAL integration the relationship of trust is established among all subsystems
within a WebSEAL-protected web object space. Once a request is allowed into the object space,
subsystems within the space will trust that the request is properly authorized and authenticated.
There is no second-pass authentication performed for these requests at any layer of the Documentum
stack (that is, DFS, DFC, or Content Server). The Documentum subsystems will have no means to
verify whether or not a request has been genuinely authenticated by WebSEAL, so to avoid any
security threats, the network must be configured in such a way that the WebSEAL proxy is the
only point of access to the Documentum system.
To complete the trusted subsystem pattern, there needs to be a separate relationship of trust
established between the DFS/DFC tier and Content Server so that only trusted DFS/DFC instances
are granted access to repositories. This is accomplished using DFC Principal Authentication, in which
DFC uses a trusted principal to log onto Content Server on behalf of a user without authentication.
For more information see Configuring a trust relationship between DFS/DFC and Content Server,
page 219.
Note: For detailed information on WebSEAL, refer to the WebSEAL Administration Guide.
Client integration
In a WebSEAL deployment all authentication is performed by WebSEAL, which is trusted by
other subsystems within the WebSEAL-protected web object space. The client might provide a
prompt for the login information (in the case of user name/password authentication, for example), or
might obtain all the required authentication information transparently for the user (as in the case of
Kerberos or smartcard/certificate based authentication). Regardless of the means of obtaining the
login information, it is provided to WebSEAL for authentication. Once a successfully authenticated
request is allowed through, WebSEAL returns a state cookie named PD-S-SESSION-ID which can be
reused by the client to avoid authenticating subsequent requests.
From a DFS perspective there are two client-side integration types: a browser integration for a DFS
based web application, and a native web services integration (often a desktop application). Both can
follow the HTTP SSO interaction pattern just described.
Browser integration
Most of the time, WebSEAL integrations will not require significant effort on the part of the web
application developer to adopt a new authentication mechanism, because WebSEAL is optimized
for this type of integration. Browsers can transparently handle HTTP redirects to login pages,
or authentication mechanisms based on HTTP Negotiate. This makes for easy integration with
WebSEAL as it enables the proxy to display a username/password prompt in the browser window
(in the case of redirects), and to negotiate Kerberos or other authentication tokens transparently for
both the end user and the web application. An expired session cookie can be gracefully handled by
negotiating a new one using forms-based or any other type of authentication.
DFS Productivity layer consumers rely on proprietary DFS authentication schemes, as well as
standard WS-Security specifications, which WebSEAL does not support. As a result, authenticating
to WebSEAL using the DFS productivity layers “out-of-the-box” is not possible. However, it is
possible to reuse established WebSEAL sessions by setting the PD-S-SESSION-ID cookie on the
outgoing web service request. Implementing a custom client-side JAX-WS handler or .NET WCF
endpoint behavior is recommended for this purpose.
The following productivity-layer API should be used to add a JAX-WS handler to the client-side
handler chain:
public <T> T getRemoteService(
Class<T> wsInterface, IServiceContext serviceContext,
String serviceModule, String contextRoot,
List<Handler> handlerChain
) throws ServiceInvocationException
The following C# API can be used to add a custom WCF endpoint behavior:
public T GetRemoteService<T>(
IServiceContext serviceContext, String serviceModule,
String contextRoot, List<IEndpointBehavior> behaviors
)
The DFS SDK provides a sample JAX-WS client-side SOAP handler, CookieSettingJaxwsHandler.java,
and a client-side WCF endpoint behavior, CookieSettingWcfBehavior.cs, which you can use as
templates for your custom client-side handler or behavior.
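For example, a brief sketch of adding the sample handler to the chain (the handler's constructor
arguments are an assumption; adapt them to the actual sample source):
// add the sample cookie-setting handler to the client-side handler chain
List<Handler> handlers = new ArrayList<Handler>();
handlers.add(new CookieSettingJaxwsHandler(
    "PD-S-SESSION-ID", pdSessionIdValue)); // hypothetical constructor
IObjectService service = ServiceFactory.getInstance().getRemoteService(
    IObjectService.class, context, "core", serverUrl + "/services", handlers);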
DFS web services consumers that do not use the productivity layer integrate with WebSEAL using
the same technique as productivity-layer consumers, except that they must use the API provided by
their web services framework to add the custom SOAP handler to the handler chain or, in WCF, to
add a custom endpoint behavior to the client.
Some WSDL-only consumers might decide to use HTTP Basic Authentication for web services to
integrate with WebSEAL. When doing so, they should keep in mind that this kind of integration
might not be possible when UCF content transfer is required as UCF connections need an established
WebSEAL session to initialize.
UCF integration
A UCF connection will not be successfully established unless it is provided a valid WebSEAL session.
In the case of both browser-based and thick-client UCF integrations, the UCF connection has to
“piggyback” on an established WebSEAL session by providing a valid PD-S-SESSION-ID cookie to a
UcfConnection instance, using the following API:
public UcfConnection(URL ucfServerUrl, String targetDeploymentId,
com.emc.documentum.fs.rt.ucf.Cookie... cookies) throws UcfException
DFS-orchestrated UCF content transfer is not supported with WebSEAL authentication, so you
must use client-orchestrated UCF to provide the PD-S-SESSION-ID cookie when establishing a UCF
connection, as shown in the following listing:
IServiceContext context = ContextFactory.getInstance().newContext();
// completion of the truncated listing; pdSessionCookie is a
// com.emc.documentum.fs.rt.ucf.Cookie holding the PD-S-SESSION-ID value
UcfConnection c = new UcfConnection(ucfServerUrl, targetDeploymentId,
    pdSessionCookie);
JAX-WS web service consumers require a WSDL, either local or downloaded from a remote server,
to instantiate a service (see JAX-WS Issue 876). WSDLs cannot be downloaded from servers in a
WebSEAL-protected web object space without an authenticated session. To work around this issue,
it is recommended to use any of the following solutions:
• use the JDK’s CookieHandler/CookieManager to set a system-wide cookie handler
• exclude the WSDL URLs from the protected web object space
• use a local copy of the service WSDLs
Java productivity-layer consumers can set the following DFS runtime property to instruct the DFS
runtime to use the local WSDLs shipped with the DFS SDK:
dfs.wsdl.location=file:///${emc-dfs-sdk}/etc/wsdl
Set this property in the dfs-client.xml client configuration file as shown in the example below:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<DfsClientConfig defaultModuleName="core"
registryProviderModuleName="core">
<ModuleInfo .../>
<DfsClientProperty name="dfs.wsdl.location"
value="file:///C:/shared/emc-dfs-sdk-6.7/etc/wsdl"/>
</DfsClientConfig>
WSDL-only consumers have to explicitly provide the path to a local WSDL when instantiating their
javax.xml.ws.Service objects.
Note: .NET productivity layer consumers are not affected by this limitation as WCF does not require
a WSDL to instantiate a web services consumer.
UrlContent
There is no API to provide cookies when working with UrlContent in this release. Customers are
expected to handle remote content download independently in this case. A UrlContent instance
exposes the location (URL) of the content for this purpose.
Server-side integration
Part of the simplicity and flexibility of the WebSEAL integration comes from the fact that no
matter what authentication mechanism the client uses to negotiate the session cookie (HTTP Basic
Authentication, certificate-based authentication, Kerberos tokens, and so on), the subsystems that
receive the request via the WebSEAL proxy always get the same login information. The login
information can be in any combination of the following formats:
• HTTP iv-user header
• HTTP iv-creds header
• LTPA cookie
For a trusted subsystem authentication pattern, an iv-user header is convenient because it contains
the user name of the authenticated user, so this is recommended for server-side integration. The
format of the login information (which can be any of those listed above or a combination), can be
specified using a configuration setting on the WebSEAL server.
From the perspective of a DFS application developer, the server-side integration can deal either with:
• a set of applications that all support a trusted subsystem authentication pattern
• or a set of applications some of which support and some of which do not support explicit
authentication.
In either case, the web application can read the login information (for example, the iv-user header)
from the incoming request and obtain a user name from it. The user name can then be set in a
PrincipalIdentity, as shown in the preceding sections, which enables DFC to log in on behalf of
the pre-authenticated user.
Note that to accomplish this, the DFC instance in the web application must either be configured as
privileged, or there must be a superuser account configured in trust.properties on the application class
path (see Configuring a trust relationship between DFS/DFC and Content Server, page 219).
The -j junction option provides an additional feature to modify the cookie name by prepending
a special string:
AMWEBJCT!<jct-name>!
For example, if a cookie named JSESSIONID arrives across a junction called /jctA, the cookie name
is changed to:
AMWEBJCT!jctA!JSESSIONID
However, as a front-end application of the WebSEAL proxy server, the DFS client depends on the
JSESSIONID cookie for its operations. Therefore, the JSESSIONID cookie-renaming behavior should
be disabled. There are two options for accomplishing this:
• Preserve the names of all cookies.
Prevent renaming of non-domain cookies across a specific -j junction by configuring that junction
with the -n option.
• Preserve the names of specified cookies.
The name entry in the [preserve-cookie-names] stanza of the WebSEAL configuration file allows
you to list the specific cookie names that are not to be renamed by WebSEAL. For example:
[preserve-cookie-names]
name = JSESSIONID
For further details, refer to "Handling cookies from servers across multiple -j junctions" in
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.itame.doc/am611_
webseal_admin620.htm.
This chapter provides a general orientation for users of DFC who are considering creating DFS client
applications. It compares some common DFC interfaces and patterns to their functional counterparts
in DFS. This chapter covers the following topics:
• Fundamental differences, page 223
• Login and session management, page 224
• Creating objects and setting attributes, page 227
• Versioning, page 234
• Querying the repository, page 239
• Starting a workflow, page 242
Fundamental differences
DFS is a service-oriented API and an abstraction layer over DFC. DFS is simpler to use than DFC
and allows development of client applications in less time and with less code. It also greatly
increases the interoperability of the Documentum platform and related technologies by providing a
WSDL interface to SOAP clients generally, as well as client libraries for both Java and .NET. However,
because it exposes a data model and service API that are significantly different from DFC, it does
require some reorientation for developers who are used to DFC.
When programming in DFS, some of the central and familiar concepts from DFC are no longer
a part of the model.
Session managers and sessions are not part of the DFS abstraction for DFS consumers. However,
DFC sessions are used by DFS services that interact with the DFC layer. The DFS consumer sets up
identities (repository names and user credentials) in a service context, which is used to instantiate
service proxies, and with that information DFS services take care of all the details of getting and
disposing of sessions.
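For example, a minimal remote-consumer sketch (repository name, credentials, and address are
placeholders; the RepositoryIdentity constructor arguments are repository, user, password, and domain):
IServiceContext context = ContextFactory.getInstance().newContext();
context.addIdentity(
    new RepositoryIdentity("myRepository", "myUser", "myPassword", ""));
IObjectService service = ServiceFactory.getInstance().getRemoteService(
    IObjectService.class, context, "core", "http://localhost:8888/services");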
DFS does not have (at the exposed level of the API) an object type corresponding to a SysObject.
Instead it provides a generic DataObject class that can represent any persistent object, and which is
associated with a repository object type using a property that holds the repository type name (for
example “dm_document”). Unlike DFC, DFS does not generally model the repository type system
(that is, provide classes that map to and represent repository types). Any repository type can be
represented by a DataObject, although some more specialized classes can also represent repository
types (for example an Acl or a Lifecycle).
In DFS, we’ve chosen to call the methods exposed by services operations, in part because this is what
they are called in the WSDLs that represent the web service APIs. Don’t confuse the term with DFC
operations—in DFS the term is used generically for any method exposed by the service.
DFS services generally expose just a few service operations (the TaskManagement service
is a notable exception). The operations generally have simple signatures. For example, the Object
service update operation has this signature:
DataPackage update(DataPackage dataPackage, OperationOptions options)
However, this “simple” operation provides a tremendous amount of power and flexibility. It’s just
that the complexity has moved from the number of methods and the complexity of the method
signature to the objects passed in the operation. The operation makes a lot of decisions based on the
composition of the objects in the DataPackage and relationships among those objects, and on profiles
and properties provided in the operationOptions parameter or set in the service context—these
settings are used to modify the default assumptions made by the service operation. The client spends
most of its effort working with local objects, rather than in conversation with the service API.
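As an illustration of this style, a minimal update sketch (the object ID is a placeholder, and
objectService is assumed to be an instantiated Object service proxy):
// identify an existing object and change one property locally
ObjectIdentity objIdentity = new ObjectIdentity(
    new ObjectId("0900000180000001"), repositoryName); // placeholder ID
DataObject dataObject = new DataObject(objIdentity, "dm_document");
dataObject.getProperties().set("subject", "a new subject");

// the composition of the DataPackage drives what update does
DataPackage result = objectService.update(
    new DataPackage(dataObject), new OperationOptions());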
Understanding Sessions
To do any work in a repository, you must first get a session on the repository. A session (IDfSession)
maintains a connection to a repository, and gives access to objects in the repository for a specific
logical user whose credentials are authenticated before establishing the connection. The IDfSession
interface provides a large number of methods for examining and modifying the session itself, the
repository, and its objects, as well as for using transactions (refer to IDfSession in the Javadocs for a
complete reference).
A session manager (IDfSessionManager) manages sessions for a single user on one or more
repositories. You create a session manager using the DfClient.newSessionManager factory method.
The session manager serves as a factory for generating new IDfSession objects using the
IDfSessionManager.newSession method. Immediately after using the session to do work in the
repository, the application should release the session using the IDfSessionManager.release() method
in a finally clause. The session initially remains available to be reclaimed by the session manager
instance that released it, and subsequently will be placed in a connection pool where it can be shared.
To get a session manager, encapsulate a set of user credentials in an IDfLoginInfo object and pass
this with the repository name to the IDfSessionManager.setIdentity method. In simple cases,
where the session manager will be limited to providing sessions for a single repository, or where
the login credentials for the user are the same in all repositories, you can set a single identity to
IDfLoginInfo.ALL_DOCBASES (= *). This causes the session manager to map any repository name
for which there is no specific identity defined to a default set of login credentials.
/**
 * Creates a simplest-case IDfSessionManager.
 * The user in this case is assumed to have the same login
 * credentials in any available repository.
 */
public static IDfSessionManager getSessionManager
    (String userName, String password) throws Exception
{
    // create a client object using a factory method in DfClientX
    // (remainder of the method restored; it was lost in the source)
    IDfClientX clientX = new DfClientX();
    IDfSessionManager sessionMgr = clientX.getLocalClient().newSessionManager();
    IDfLoginInfo loginInfo = clientX.getLoginInfo();
    loginInfo.setUser(userName);
    loginInfo.setPassword(password);
    sessionMgr.setIdentity(IDfLoginInfo.ALL_DOCBASES, loginInfo);
    return sessionMgr;
}
If the session manager has multiple identities, you can add these lazily, as sessions are requested.
The following method adds an identity to a session manager, stored in the session manager referred
to by the Java instance variable sessionMgr. If there is already an identity set for the repository
name, setIdentity will throw a DfServiceException. To allow your method to overwrite existing
identities, you can check for the identity (using hasIdentity) and clear it (using clearIdentity) before
calling setIdentity.
public void addIdentity
    (String repository, String userName, String password)
    throws DfServiceException
{
    // create an IDfLoginInfo object and set its fields
    IDfLoginInfo loginInfo = new DfLoginInfo();
    loginInfo.setUser(userName);
    loginInfo.setPassword(password);

    if (sessionMgr.hasIdentity(repository))
    {
        sessionMgr.clearIdentity(repository);
    }
    sessionMgr.setIdentity(repository, loginInfo);
}
Note that setIdentity does not validate the repository name nor authenticate the user credentials.
This normally is not done until the application requests a session using the getSession or newSession
method; however, you can authenticate the credentials stored in the identity without requesting a
session using the IDfSessionManager.authenticate method. The authenticate method, like getSession
and newSession, uses an identity stored in the session manager object, and throws an exception if the
user does not have access to the requested repository.
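For example, a brief sketch (authenticate throws an exception if the stored credentials are invalid
for the repository):
// validate the stored identity's credentials without requesting a session
sessionMgr.authenticate(repositoryName);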
import com.emc.documentum.fs.rt.context.DfcSessionManager;
...
Using DFC, you can create an empty document object, then populate it with values you provide
at runtime.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
// ... (object creation code elided in source)
finally
{
// Always, always release the session when you're finished.
sessionManager.release(mySession);
}
}
public static final String DM_DOCUMENT = "dm_document";
}
Using DFC, you can create a folder object by instantiating a new folder object, then setting its name
and parent.
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
// If the folder doesn't already exist, set its name and parent,
// then save the folder. (newFolder is assumed to be created earlier
// in the listing, which was elided in the source.)
if (aFolder == null)
{
    newFolder.setObjectName(folderName);
    newFolder.link(parentName);
    newFolder.save();
    return true;
}
You can set attributes on an object directly by type. Most often, you will have a specific control that
will set a specific data type. Alternatively, this example queries for the data type of the attribute name
the user supplies, then uses a switch statement to set the value accordingly.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfType;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfTime;
mySession = sessionManager.getSession(repositoryName);
// Query the object to get the correct data type for the attribute.
int attributeDatatype = sysObj.getAttrDataType(attributeName);
StringBuffer results = new StringBuffer("");
// switch on the attribute's data type (switch header restored; it was
// lost in the source)
switch (attributeDatatype)
{
case IDfType.DF_INTEGER:
    sysObj.setInt(attributeName,
        Integer.parseInt(attributeValue));
    break;
case IDfType.DF_STRING:
    sysObj.setString(attributeName, attributeValue);
    break;
case IDfType.DF_TIME:
    DfTime newTime =
        new DfTime(attributeValue, IDfTime.DF_TIME_PATTERN2);
    sysObj.setTime(attributeName, newTime);
    break;
case IDfType.DF_UNDEFINED:
    sysObj.setString(attributeName, attributeValue);
    break;
}
// Use the fetch() method to verify that the object has not been
// modified.
if (sysObj.fetch(null))
{
results = new StringBuffer("Object is no longer current.");
}
else
{
// If the object is current, save the object with the new attribute
// value.
sysObj.save();
results.append("\nNew value: " + attributeValue);
}
return results.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return "Set attribute command failed.";
}
finally
{
    // release the session (body restored; it was lost in the source)
    sessionManager.release(mySession);
}
Note: In the following example and other examples in this document, it is assumed that the service
object (proxy) has already been instantiated and is stored as an instance variable. For a more linear
example that uses a local variable for the service object, see Querying the repository in DFS, page 240.
Working with properties this way, you deal more directly with the Content Server metadata model
than working with encapsulated data in DFC classes that represent repository types.
public DataPackage createWithContentDefaultContext(String filePath)
    throws ServiceException
{
    File testFile = new File(filePath);
    if (!testFile.exists())
    {
        throw new RuntimeException("Test file: " +
            testFile.toString() +
            " does not exist");
    }
    // ... (remainder of the listing elided in source)
You can also create relationships between objects (such as the relationship between an object and a
containing folder or cabinet, or virtual document relationships), so that you actually pass in a data
graph to the operation, which determines how to handle the data based on whether the objects
already exist in the repository. For example, the following creates a new (contentless) document and
links it to an existing folder.
public DataObject createAndLinkToFolder(String folderPath)
{
    // create a contentless document to link into folder
    String objectName = "linkedDocument" + System.currentTimeMillis();
    String repositoryName = defaultRepositoryName;
    ObjectIdentity sampleObjId = new ObjectIdentity(repositoryName);
    DataObject sampleDataObject = new DataObject(sampleObjId, "dm_document");
    sampleDataObject.getProperties().set("object_name", objectName);

    // link into the existing folder (restored; this part of the listing was
    // lost in the source, so the exact sample code may differ)
    ObjectPath objectPath = new ObjectPath(folderPath);
    ObjectIdentity sampleFolderIdentity =
        new ObjectIdentity(objectPath, repositoryName);
    ReferenceRelationship sampleFolderRelationship = new ReferenceRelationship();
    sampleFolderRelationship.setName(Relationship.RELATIONSHIP_FOLDER);
    sampleFolderRelationship.setTarget(sampleFolderIdentity);
    sampleFolderRelationship.setTargetRole(Relationship.ROLE_PARENT);
    sampleDataObject.getRelationships().add(sampleFolderRelationship);

    return sampleDataObject;
}
Versioning
This section compares techniques for checkin and checkout of objects in DFC and DFS.
The execute method of an IDfCheckoutOperation object checks out the documents defined for the
operation. The checkout operation:
• Locks the documents
• Copies the documents to your local disk
• Always creates registry entries to enable DFC to manage the files it creates on the file system
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
try
{
// Instantiate a client.
IDfClientX clientx = new DfClientX();
If any node corresponds to a document that is already checked out, the system does not check it out
again. DFC does not treat this as an error. If you cancel the checkout, however, DFC cancels the
checkout of the previously checked out node as well.
DFC applies XML processing to XML documents. If necessary, it modifies the resulting files to ensure
that it has enough information to check in the documents properly.
The execute method of an IDfCheckinOperation object checks documents into the repository. It
creates new objects as required, transfers the content to the repository, and removes local files if
appropriate. It checks in existing objects that any of the nodes refer to (for example, through XML
links).
Check in a document as the next major version (for example, version 1.2 would become version 2.0).
The default increment is NEXT_MINOR (for example, version 1.2 would become version 1.3).
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
import com.documentum.operations.IDfCheckinOperation;
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Set the version increment. In this case, the next major version
// ( version + 1)
cio.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);
// After the item is created, you can get it immediately using the
// getNewObjectId method.
The following are considerations when you are creating a custom checkin operation.
To check in a document, you pass an object of type IDfSysObject or IDfVirtualDocument, not the file on
the local file system, to the operation’s add method. In the local client file registry, DFC records the path
and filename of the local file that represents the content of an object. If you move or rename the file,
DFC loses track of it and reports an error when you try to check it in.
Setting the content file, as in IDfCheckinNode.setFilePath, overrides DFC’s saved information.
If you specify a document that is not checked out, DFC does not check it in. DFC does not treat
this as an error.
You can specify checkin version, symbolic label, or alternate content file, and you can direct DFC to
preserve the local file.
If between checkout and checkin you remove a link between documents, DFC adds the orphaned
document to the checkin operation as a root node, but the relationship between the documents no
longer exists in the repository.
Executing a checkin operation normally results in the creation of new objects in the repository. If
opCheckin is the IDfCheckinOperation object, you can obtain a complete list of the new objects
by calling
IDfList list = opCheckin.getNewObjects();
The list contains the object IDs of the newly created SysObjects.
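For illustration, a minimal sketch of iterating that list using the standard IDfList accessors:
IDfList newObjects = opCheckin.getNewObjects();
for (int i = 0; i < newObjects.getCount(); i++)
{
    // each entry is the object ID of a newly created SysObject
    System.out.println("created: " + newObjects.getId(i));
}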
In addition, the IDfCheckinNode objects associated with the operation are still available after
you execute the operation. You can use their methods to find out many other facts about the new
SysObjects associated with those nodes.
resultDp = versionControlService.checkout(objIdSet,
operationOptions);
}
catch (Exception e)
{
e.printStackTrace();
throw new RuntimeException(e);
}
System.out.println("Checkout successful");
versionControlService.cancelCheckout(objIdSet);
System.out.println("Checkout cancelled");
return resultDp;
}
String ownerName
)
{
IDfSession mySession = null;
try
{
mySession = sessionManager.getSession(repositoryName);
IDfQuery query = new DfClientX().getQuery();
query.setDQL("select r_object_id from dm_document " +
    "where owner_name='" + ownerName + "'");
IDfCollection co = query.execute(mySession,
    IDfQuery.DF_READ_QUERY);
return co;
}
}
import java.util.List;
import com.emc.documentum.fs.datamodel.core.CacheStrategyType;
import com.emc.documentum.fs.datamodel.core.DataObject;
import com.emc.documentum.fs.datamodel.core.DataPackage;
import com.emc.documentum.fs.datamodel.core.OperationOptions;
import com.emc.documentum.fs.datamodel.core.content.ContentTransferMode;
import com.emc.documentum.fs.datamodel.core.context.RepositoryIdentity;
import com.emc.documentum.fs.datamodel.core.profiles.ContentTransferProfile;
import com.emc.documentum.fs.datamodel.core.properties.PropertySet;
import com.emc.documentum.fs.datamodel.core.query.PassthroughQuery;
import com.emc.documentum.fs.datamodel.core.query.QueryExecution;
import com.emc.documentum.fs.datamodel.core.query.QueryResult;
import com.emc.documentum.fs.rt.ServiceException;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.services.core.client.IQueryService;
/**
* This class demonstrates how to code a typical request to a DFS core service
* (in this case QueryService). The code goes through the steps of creating a
* ServiceContext, which contains repository and credential information, and
* calling the service with the profile.
*
* This sample assumes that you have a working installation
* of DFS that points to a working Content Server.
*
*/
public class QueryServiceTest
{
/************************************************************
 * You must supply valid values for the following fields:
 * (the repository, user name, and password fields were
 * elided in the source)
 ***********************************************************/

/* The module name for the DFS core services */
private static String moduleName = "core";
private IServiceContext serviceContext;
public QueryServiceTest()
{
}
Starting a workflow
DFC provides a rich interface into workflow functionality. DFS, as of release 6 SP1, has a much more
limited interface that supports fetching information about workflow templates and metadata and
starting a workflow.
// workflow attachment
info.addWorkflowAttachment("dm_sysobject", wfAttachment);
// packages
List<ProcessPackageInfo> pkgList = info.getPackages();
for (ProcessPackageInfo pkg : pkgList)
{
pkg.addDocuments(docIds);
pkg.addNote("note for " + pkg.getPackageName() + " "
+ noteText, true);
}
// alias
if (info.isAliasAssignmentRequired())
{
    List<ProcessAliasAssignmentInfo> aliasList
        = info.getAliasAssignments();
    for (ProcessAliasAssignmentInfo aliasInfo : aliasList)
    {
        String aliasName = aliasInfo.getAliasName();
        String aliasDescription = aliasInfo.getAliasDescription();
        int category = aliasInfo.getAliasCategory();
        if (category == 1) // user
        {
            aliasInfo.setAliasValue(userName);
        }
        else if (category == 2 || category == 3) // group, user or group
        {
            aliasInfo.setAliasValue(groupName);
        }
    }
}

// performer
if (info.isPerformerAssignmentRequired())
{
    List<ProcessPerformerAssignmentInfo> perfList
        = info.getPerformerAssignments();
    for (ProcessPerformerAssignmentInfo perfInfo : perfList)
    {
        int category = perfInfo.getCategory();
        int perfType = perfInfo.getPerformerType();
        String name = "";
        List<String> nameList = new ArrayList<String>();
        if (category == 0) // user
        {
            name = userName;
        }
        else if (category == 1 || category == 2) // group, user or group
        {
            name = groupName;
        }
        else if (category == 4) // work queue
        {
            name = queueName;
        }
        nameList.add(name);
        perfInfo.setPerformers(nameList);
    }
}

ObjectIdentity wf = workflowService.startProcess(info);
System.out.println("started workflow: " + wf.getValueAsString());