EMC Documentum Foundation Classes
Version 7.2
Development Guide
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Legal Notice
Copyright © 1994-2015 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS
OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. Adobe and Adobe PDF
Library are trademarks or registered trademarks of Adobe Systems Inc. in the U.S. and other countries. All other trademarks
used herein are the property of their respective owners.
Documentation Feedback
Your opinion matters. We want to hear from you regarding our product documentation. If you have feedback about how we can
make our documentation better or easier to use, please send us your feedback directly at [email protected].
Table of Contents
Preface ................................................................................................................................ 11
Chapter 1 Getting Started with DFC ............................................................................. 13
What Is DFC? ................................................................................................... 13
Where Is DFC? ................................................................................................. 14
DFC programming resources ............................................................................ 15
DFC documentation set................................................................................. 15
DFC developer support ................................................................................. 15
DFC online reference documentation ............................................................. 16
Using dfc.properties to configure DFC ............................................................... 16
BOF and global registry settings .................................................................... 16
Connecting to the global registry ............................................................... 17
Performance tradeoffs ............................................................................... 17
Diagnostic settings........................................................................................ 17
Diagnostic mode ...................................................................................... 18
Configuring docbrokers ............................................................................ 18
dfc.data.dir .............................................................................................. 18
Tracing options......................................................................................... 18
XML processing options................................................................................ 19
Search options .............................................................................................. 19
Storage policy options................................................................................... 19
Performance tradeoffs ................................................................................... 19
Registry emulation ....................................................................................... 19
Microsoft Object Linking and Embedding (OLE) ........................................ 20
Client file system locations ............................................................................ 20
New dfc.properties Key to Turn Off Trusted Login ......................................... 20
Using DFC logging ........................................................................................... 20
This manual describes EMC Documentum Foundation Classes (DFC). It provides overview and
summary information.
DFC developer support, page 15, provides an introduction to other developer resources.
Intended Audience
This manual is for programmers who understand how to use Java and are generally familiar with the
principles of object oriented design.
Revision History
The following changes have been made to this document.
What Is DFC?
DFC is a key part of the Documentum software platform. While the main user of DFC is other
Documentum software, you can use DFC in any of the following ways:
• Access Documentum functionality from within one of your company’s enterprise applications.
For example, your corporate purchasing application can retrieve a contract from your
Documentum system.
• Customize or extend products such as Webtop.
For example, you can modify Webtop functionality to implement one of your company’s business
rules.
• Write a method or procedure for Content Server to execute as part of a workflow or document
lifecycle.
For example, the procedure that runs when you promote an XML document might apply a
transformation to it and start a workflow to subject the transformed document to a predefined
business process.
You can view Documentum functionality as having the following elements:
Repositories: One or more places where you keep the content and associated metadata of your
organization’s information. The metadata resides in a relational database, and the content
resides in various storage elements.
Content Server: Software that manages, protects, and imposes an object oriented structure on the
information in repositories. It provides tools for managing the lifecycles of that information
and automating processes for manipulating it.
Client programs: Software that provides interfaces between Content Server and end users. The most
common clients run on application servers (for example, Webtop).
End Users: People who control, contribute, or use your organization’s information. They use a
browser to access client programs running on application servers.
In this view of Documentum functionality, Documentum Foundation Classes (DFC) lies between
Content Server and clients. Documentum Foundation Services are the primary client interface to
the Documentum platform. Documentum Foundation Classes are used for server-side business
logic and customization.
DFC is Java based. As a result, client programs that are Java based can interface directly with DFC.
When application developers use DFC, it is usually within the customization model of a Documentum
client, though you can also use DFC to develop the methods associated with Content Server
functionality, such as document lifecycles.
In the Java application server environment, Documentum client software rests on the foundation
provided by the Web Development Kit (WDK). This client has a customization model that allows you
to modify the user interface and also implement some business logic. However, the principal tool for
adding custom business logic to a Documentum system is to use the Business Object Framework
(BOF).
BOF enables you to implement business rules and patterns as reusable elements, called modules. The
most important modules for application developers are type based objects (TBOs), service based
objects (SBOs), and Aspects. Aspect modules are similar to TBOs, but enable you to attach properties
and behavior on an instance-by-instance basis, independent of the target object’s type.
BOF makes it possible to extend some of DFC’s implementation classes. As a result, you can introduce
new functionality in such a way that existing programs begin immediately to reflect changes you
make to the underlying business logic.
The EMC Documentum Content Server Fundamentals Guide provides a conceptual explanation of the
capabilities of Content Server. DFC provides a framework for accessing those capabilities. Using
DFC and BOF makes your code much more likely to survive future architectural changes to the
Documentum system.
Where Is DFC?
DFC runs on a Java virtual machine (JVM), which can be on:
• The machine that runs Content Server.
For example, to be called from a method as part of a workflow or document lifecycle, or to be
executed on the Java Method Server on the Content Server host.
• A middle-tier system.
For example, on an application server to support WDK or to execute server methods.
For client machines, Documentum provides Documentum Foundation Services (DFS) as the primary
support for applications communicating with the Documentum platform.
Note: The DFC release notes provide information on the supported versions of the JVM. These
can change from one minor release to the next.
The EMC Documentum Foundation Classes Installation Guide describes the locations of files that DFC
installs. The config directory contains several files that are important to DFC’s operation.
The Developer section contains tips, sample code, downloads, a component exchange, and other
resources for application developers using DFC, WDK, and other tools.
The dfc.properties file contains the following properties that are mandatory for using a global registry.
• dfc.globalregistry.repository
The name of the repository. The repository must project to a connection broker that DFC has
access to.
• dfc.globalregistry.username
The user name part of the credentials that DFC uses to access the global registry. Global registry
user, page 130 provides information about how to create this user.
• dfc.globalregistry.password
The password part of the credentials that DFC uses to access the global registry. The DFC installer
encrypts the password if you supply it. If you want to encrypt the password yourself, use the
following instruction at a command prompt:
java com.documentum.fc.tools.RegistryPasswordUtils password
You can call the getGlobalRegistryPassword method to get the global registry password.
For example, the following code snippet prints this password:
System.out.println(DfPreferences.getInstance().getGlobalRegistryPassword());
Note: This method returns the global registry password as it is presented in the dfc.properties file.
If the password is encrypted in the file, it is encrypted in the return.
The dfc.properties file also provides an optional property to resist attempts to obtain unauthorized
access to the global registry. For example, the entry
dfc.bof.registry.connect.attempt.interval=60
sets the minimum interval between connection attempts to the default value of 60 seconds.
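Taken together, a minimal global-registry section of dfc.properties might look like the following sketch. The repository name, user name, and password value are placeholders, not values prescribed by this guide:

```properties
# Mandatory global registry properties (placeholder values)
dfc.globalregistry.repository=MyGlobalRegistry
dfc.globalregistry.username=registryuser
dfc.globalregistry.password=<encrypted-password>
# Optional: minimum interval between connection attempts, in seconds
dfc.bof.registry.connect.attempt.interval=60
```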
Performance tradeoffs
You can adjust the amount of time DFC relies on cached information before checking for consistency
between the local cache and the global registry. For example, the entry
dfc.bof.cache.currency_check_interval=60
sets that interval to the default value of 60 seconds. Because global registry information tends to be
relatively static, you might be able to check less frequently in a production environment. On the
other hand, you might want to check more frequently in a development environment. The check is
inexpensive. If nothing has changed, the check consists of looking at one vstamp object. The EMC
Documentum Content Server Object Reference Guide provides information about vstamp objects. The
name of this entry was formerly dfc.bof.cacheconsistency.interval.
Diagnostic settings
DFC provides a number of properties that facilitate diagnosing and solving problems.
Diagnostic mode
DFC can run in diagnostic mode. To enable it, include the following setting in dfc.properties:
dfc.diagnostics.resources.enable=true
The set of problems that diagnostic mode can help you correct can change without notice. Here are
some examples of issues detected by diagnostic mode:
• Session leaks
• Collection leaks
DFC catches the leaks at garbage collection time. If it finds an unreleased session or an unclosed
collection, it places an appropriate message in the log.
Configuring docbrokers
You must set the repeating property dfc.docbroker.host, one entry per docbroker. For example,
dfc.docbroker.host[0]=docbroker1.yourcompany.com
dfc.docbroker.host[1]=docbroker2.yourcompany.com
dfc.data.dir
The dfc.data.dir setting identifies the directory used by DFC to store files. By default, it is a folder
relative to the current working directory of the process running DFC. You can set this to another value.
Tracing options
DFC has extensive tracing support. Trace files can be found in a directory called logs under
dfc.data.dir. For simple tracing, add the following line to dfc.properties:
dfc.tracing.enable=true
That property traces DFC entry calls, return values and parameters.
For more extensive tracing information, add the following properties to dfc.properties.
dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.include_rpcs=true
These settings produce more detail and include the RPCs sent to the server. Start with the simple
trace, because the verbose trace produces much more output.
Search options
DFC supports the search capabilities of Federated Search Services (FS2) with a set of properties.
The FS2 installer sets some of these. Most of the others specify diagnostic options or performance
tradeoffs.
Performance tradeoffs
Several properties enable you to make tradeoffs between performance and the frequency with which
DFC executes certain maintenance tasks.
DFC caches the contents of properties files such as dfc.properties or dbor.properties. If you change
the contents of a properties file, the new value does not take effect until DFC rereads that file. The
dfc.config.check_interval property specifies, in seconds, how often DFC checks its configuration
files for changes. The default value for this property is 30.
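For example, to have DFC check for configuration file changes once a minute instead of every 30 seconds, you could add the following line to dfc.properties (the value 60 is illustrative):

```properties
dfc.config.check_interval=60
```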
DFC periodically reclaims unused resources. The dfc.housekeeping.cleanup.interval property
specifies the interval between cleanups. The default value is 7 days.
Some properties described in the BOF and global registry settings, page 16 and Search options, page
19 sections also provide performance tradeoffs.
Registry emulation
DFC uses the dfc.registry.mode property to keep track of whether to use a file, rather than the
Windows registry, to store certain settings. The DFC installer sets this property to file by default.
When importing or exporting Microsoft OLE documents, DFC can automatically search for linked
items and import them as individual system objects linked as nodes in a virtual document. On export,
the document can be reassembled into a single document with OLE links. In order to enable this
behavior, you must configure the registry emulation mode in dfc.properties.
By default, the setting is:
dfc.registry.mode = file
To enable OLE file handling, set the value to:
dfc.registry.mode = windows
New dfc.properties Key to Turn Off Trusted Login
Setting this key to false requires users always to provide a password, even when logging in as the
installation owner.
Using DFC logging
In past releases, DFC used the DMCL library to communicate with the server and provided support
for integrating DMCL logging with DFC logging. Because DMCL is no longer used, the features that
integrated its tracing into the DFC log are no longer needed.
This chapter describes some common patterns of DFC application code. It contains the following
sections.
• Client/Server model, page 23
• Packages, page 23
• Interfaces, page 24
• Processing a repository object, page 25
• Failover enabled methods, page 27
Client/Server model
The Documentum architecture generally follows the client/server model. DFC-based programs are
client programs, even if they run on the same machine as a Documentum server. DFC encapsulates its
client functionality in the IDfClient interface, which serves as the entry point for DFC code. IDfClient
handles basic details of connecting to Documentum servers.
You obtain an IDfClient object by calling the getLocalClient method of a DfClientX object.
An IDfSession object represents a connection with the Documentum server and provides services
related to that session. DFC programmers create new Documentum objects or obtain references to
existing Documentum objects through the methods of IDfSession.
To get a session, first create an IDfSessionManager by calling IDfClient.newSessionManager().
Next, get the session from the session manager using the procedure described in the Sessions and
Session Managers sections below. Chapter 3, Sessions and Session Managers provides information
about sessions.
Packages
DFC comprises a number of packages, that is, sets of related classes and interfaces.
• The names of DFC Java classes begin with Df (for example, DfCollectionX).
• Names of interfaces begin with IDf (for example, IDfSessionManager).
Interfaces expose DFC’s public methods. Each interface contains a set of related methods. The
Javadocs describe each package and its purpose.
Note:
• The com.documentum.operations package and the IDfSysObject interface in the
com.documentum.fc.client package have some methods for the same basic tasks (for example,
checkin, checkout). In these cases, the IDfSysObject methods are mostly for internal use and
for supporting legacy applications. The methods in the operations package perform the
corresponding tasks at a higher level. For example, they keep track of client-side files and
implement Content Server XML functionality.
• IDfClientX is the correct interface for accessing factory methods (all of its getXxx methods,
except for those dealing with the DFC version or trace levels).
The DFC interfaces form a hierarchy; some derive methods and constants from others. Use the Tree
link from the home page of the DFC online reference (see DFC online reference documentation, page
16) to examine the interface hierarchy. Click any interface to go to its definition.
Each interface inherits the methods and constants of the interfaces above it in the hierarchy. For
example, IDfPersistentObject has a save method. IDfSysObject is below IDfPersistentObject in
the hierarchy, so it inherits the save method. You can call the save method of an object of type
IDfSysObject.
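The inheritance mechanism can be illustrated with a small self-contained sketch. The interfaces below only mimic the shape of the DFC hierarchy; they are not the real DFC types:

```java
// IPersistent plays the role of IDfPersistentObject, and ISysObject
// that of IDfSysObject; both are stand-ins, not real DFC interfaces.
interface IPersistent {
    String save(); // analogous to IDfPersistentObject.save()
}

interface ISysObject extends IPersistent {
    void setObjectName(String name); // extra behavior on the subinterface
}

public class InheritanceDemo {
    static class SysObject implements ISysObject {
        private String name = "";
        public void setObjectName(String name) { this.name = name; }
        public String save() { return "saved " + name; }
    }

    // Declared to return the base interface, as DFC factory methods
    // such as newObject return IDfPersistentObject.
    static IPersistent newObject() {
        return new SysObject();
    }

    public static String demo() {
        // Cast the factory result to the more specific interface to reach
        // setObjectName; save() is inherited from the base interface.
        ISysObject obj = (ISysObject) newObject();
        obj.setObjectName("my_document");
        return obj.save();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "saved my_document"
    }
}
```

As in DFC, the cast is to an interface, never to the implementation class, so the caller stays decoupled from the implementation.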
Interfaces
Because DFC is large and complex, and because its underlying implementation is subject to change,
you should use DFC’s public interfaces.
Tip: DFC provides factory methods to instantiate objects that implement specified DFC interfaces. If
you bypass these methods to instantiate implementation classes directly, your programs may fail to
work properly, because the factory methods sometimes do more than simply instantiate the default
implementation class. For most DFC programming, the only implementation classes you should
instantiate directly are DfClientX and the exception classes (DfException and its subclasses).
DFC does not generally support direct access to, replacement of, or extension of its implementation
classes. The principal exception to these rules is the Business Object Framework (BOF). Chapter 7, Using
the Business Object Framework (BOF) provides information about BOF.
IDfPersistentObject
An IDfPersistentObject corresponds to a persistent object in a repository. With DFC you usually
don’t create objects directly. Instead, you obtain objects by calling factory methods that have
IDfPersistentObject as their return type.
Caution: If the return value of a factory method has type IDfPersistentObject, you can cast
it to an appropriate interface (for example, IDfDocument if the returned object implements
that interface). Do not cast it to an implementation class (for example, DfDocument). Doing
so produces a ClassCastException.
1. Obtain an IDfClientX object. For example, execute the following Java code:
IDfClientX cx = new DfClientX();
2. Obtain an IDfClient object by calling the getLocalClient method of the IDfClientX object. For
example, execute the following Java code:
IDfClient c = cx.getLocalClient();
The IDfClient object must reside in the same process as the Documentum client library, DMCL.
3. Obtain a session manager by calling the newSessionManager method of the IDfClient object.
For example, execute the following Java code:
IDfSessionManager sm = c.newSessionManager();
4. Use the session manager to obtain a session with the repository, that is, a reference to an object
that implements the IDfSession interface. For example, execute the following Java code:
IDfSession s = sm.getSession();
Sharable and Private Sessions, page 30 provides information about the difference between the
getSession and newSession methods of IDfSessionManager.
5. If you do not have a reference to the Documentum object, call an IDfSession method (for example,
newObject or getObjectByQualification) to create an object or to obtain a reference to an existing
object.
6. Use routines of the operations package to manipulate the object, that is, to check it out, check it in,
and so forth. For simplicity, the example below does not use the operations package. (Chapter 6,
Working with Document Operations provides the examples that use operations).
7. Release the session.
try {
    IDfDocument document =
        (IDfDocument) session.newObject( "dm_document" ); //Step 5
    document.setObjectName( "myDocument" ); //Step 6 (example name)
    document.save(); //Step 6
}
finally {
    sMgr.release( session ); //Step 7
}
Steps 1 through 4 obtain an IDfSession object, which encapsulates a session for this application
program with the specified repository.
Step 5 creates an IDfDocument object. The return type of the newObject method is IDfPersistentObject.
You must cast the returned object to IDfDocument in order to use methods that are specific to
documents.
Step 6 of the example code sets the document object name and saves it.
Note that the return type of the newObject method is IDfPersistentObject. The program explicitly
casts the return value to IDfDocument, then uses the object’s save method, which IDfDocument
inherits from IDfPersistentObject. This is an example of interface inheritance, which is an important
part of DFC programming. The interfaces that correspond to repository types mimic the repository
type hierarchy.
Step 7 releases the session, that is, places it back under the control of the session manager, sMgr.
The session manager will most likely return the same session the next time the application calls
sMgr.getSession.
Most DFC methods report errors by throwing a DfException object. Java code like that in the above
example normally appears within a try/catch/finally block, with an error handler in the catch block.
Tip: When writing code that calls DFC, it is a best practice to include a finally block to ensure that you
release storage and sessions.
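The release-in-finally discipline can be sketched with plain Java; the classes below stand in for the session manager and session and are not DFC types:

```java
// FakeSession and FakeSessionManager are stand-ins used only to show
// the try/finally release pattern; they are not DFC classes.
public class ReleaseDemo {
    static class FakeSession {
        boolean released = false;
    }

    static class FakeSessionManager {
        FakeSession getSession() { return new FakeSession(); }
        void release(FakeSession s) { s.released = true; }
    }

    public static boolean demo() {
        FakeSessionManager sessionManager = new FakeSessionManager();
        FakeSession session = sessionManager.getSession();
        try {
            // ... work with the session here; even if this block
            // throws, the finally block still runs ...
        } finally {
            sessionManager.release(session); // always hand the session back
        }
        return session.released;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```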
This chapter describes how to get, use, and release sessions, which enable your application to connect
to a repository and access repository objects.
Note: If you are programming in the WDK environment, be sure to refer to Managing Sessions in
EMC Documentum Web Development Kit Development Guide for information on session management
techniques and methods specific to WDK.
This chapter contains the following major sections:
• Sessions, page 29
• Session Managers, page 30
• Getting session managers and sessions, page 30
• Objects disconnected from sessions, page 34
• Related sessions (subconnections), page 34
• Original vs. object sessions, page 35
• Transactions, page 35
• Configuring sessions using IDfSessionManagerConfig, page 36
• Getting sessions using login tickets, page 36
• Principal authentication support, page 46
Sessions
To do any work in a repository, you must first get a session on the repository. A session (IDfSession)
maintains a connection to a repository, and gives access to objects in the repository for a specific
logical user whose credentials are authenticated before establishing the connection. The IDfSession
interface provides a large number of methods for examining and modifying the session itself, the
repository, and its objects, as well as for using transactions (refer to IDfSession in the Javadocs for a
complete reference).
Session Managers
A session manager (IDfSessionManager) manages sessions for a single user on one or more
repositories. You create a session manager using the IDfClient.newSessionManager factory method.
The session manager serves as a factory for generating new IDfSession objects using the
IDfSessionManager.newSession method. Immediately after using the session to do work in the
repository, the application should release the session using the IDfSessionManager.release() method
in a finally clause. The session initially remains available to be reclaimed by the session manager
instance that released it; at the same time, the connection associated with that session is placed
in the connection pool, where it can be shared.
The IDfSessionManager.getSession method checks for an available shared session and, if one is
available, uses it instead of creating a new session. This makes for efficient use of Content Server
connections, which are an extremely expensive resource, in a web programming environment where
a large number of sessions are required.
In this example, you directly instantiate a DfClientX object. Based on that object, you are able to use
factory methods to create the other objects required to interact with the repository.
if (sessionMgr.hasIdentity(repository))
{
    sessionMgr.clearIdentity(repository);
}
sessionMgr.setIdentity(repository, loginInfo);
Note that setIdentity does not validate the repository name nor authenticate the user credentials.
This normally is not done until the application requests a session using the getSession or newSession
method; however, you can authenticate the credentials stored in the identity without requesting a
session using the IDfSessionManager.authenticate method. The authenticate method, like getSession
and newSession, uses an identity stored in the session manager object, and throws an exception if the
user does not have access to the requested repository.
Some legacy applications might still be getting sessions from IDfClient directly, which is now
discouraged in favor of the method above. In those cases, you should call IDfSession.disconnect().
You can only release a managed session that was obtained using a factory method of the session
manager; that is, IDfSessionManager.getSession or IDfSessionManager.newSession. Getting a session
in this way implies ownership, and confers responsibility for releasing the session.
If you get a reference to an existing session, which might for example be stored as a data member
of a typed object, no ownership is implied, and you cannot release the session. This would be the
case if you obtained the session using IDfTypedObject.getSession.
The following snippet demonstrates these two cases:
// the session is owned because it came from the session manager
IDfSession session = sessionManager.getSession("myRepository");
IDfSysObject object = (IDfSysObject) session.getObject(objectId);
// the SBO can reach the session through the object reference,
// but it does not own that session and must not release it
mySbo.doSomething(object);
sessionManager.release(session);
Once a session is released, you cannot release or disconnect it again using the same session reference.
The following code will throw a runtime exception:
IDfSession session = sessionManager.getSession("myRepository");
sessionManager.release(session);
sessionManager.release(session); // throws runtime exception
Once you have released a session, you cannot use the session reference again to do anything with the
session (such as getting an object).
IDfSession session = sessionManager.getSession("myRepository");
sessionManager.release(session);
session.getObject(objectId); // throws runtime exception
Transactions
DFC supports transactions at the session manager level and at the session level. A transaction
at the session manager level includes operations on any sessions obtained by a thread using
IDfSessionManager.newSession() or IDfSessionManager.getSession() after the transaction is started
(see IDfSessionManager.beginTransaction() in the DFC Javadocs) and before it completes the
transaction (see IDfSessionManager.commitTransaction() and IDfSessionManager.abortTransaction()).
A transaction at the session level includes operations on the session that occur after the transaction
begins (see IDfSession.beginTrans()) and occur before it completes (see IDfSession.commitTrans()
and IDfSession.abortTrans()). Previous versions of DFC did not support calling beginTrans() on a
session obtained from a session manager. This restriction has been removed. The code below shows
how a TBO can use a session-level transaction.
public class MyTBO
{
protected void doSave() throws DfException
{
boolean txStartedHere = false;
if ( !getObjectSession().isTransactionActive() )
{
getObjectSession().beginTrans();
txStartedHere = true;
}
try
{
doSomething(); // Do something that requires transactions
if ( txStartedHere )
getObjectSession().commitTrans();
}
finally
{
if ( txStartedHere && getObjectSession().isTransactionActive())
getObjectSession().abortTrans();
}
}
}
The EMC Documentum Content Server Administrators Guide provides information on related server
configuration settings.
To get a session for the user using this login ticket, you pass the ticket in place of the user’s password
when setting the identity for the user’s session manager. The following sample assumes that you
have already instantiated a session manager for the user.
public IDfSession getSessionWithTicket
    (String repository, String userName) throws DfException
{
    // get a ticket using the preceding sample method
    // (getLoginTicketForUser is assumed to be defined there)
    String ticket = getLoginTicketForUser(repository, userName);
    IDfLoginInfo loginInfo = new DfLoginInfo();
    loginInfo.setUser(userName);
    loginInfo.setPassword(ticket); // the ticket takes the place of the password
    if (userSessionMgr.hasIdentity(repository))
    {
        userSessionMgr.clearIdentity(repository);
    }
    userSessionMgr.setIdentity(repository, loginInfo);
    return userSessionMgr.getSession(repository);
}
User authentication
You can use either Kerberos SSO or Java Enterprise Edition (JEE) principal authentication for user
authentication:
• Kerberos SSO, page 38
• Principal authentication support, page 46
Kerberos SSO
These topics are included:
• Overview, page 39
• Procedure to enable Kerberos SSO, page 39
• Configuring the DFC application’s service principal name and *.keytab file, page 39
• Configuring the DFC application for Kerberos, page 41
Overview
To enable authentication of the DFC application on the Kerberos Key Distribution Center (KDC),
register the DFC application’s service principal name (SPN) on the Active Directory KDC using
the Microsoft ktpass utility. A Kerberos SPN uniquely identifies a service that uses Kerberos
authentication. In this case, the service is your DFC application. Executing the ktpass utility also
generates a *.keytab file. The *.keytab file contains name/value pairs consisting of an SPN and
a long-term key derived from a password. Both the DFC application and the KDC must be able to
access the *.keytab file. You copy the *.keytab file to the DFC application machine (the machine
where the Kerberos service ticket (ST) is validated) and specify the location of the *.keytab file
in the JAAS configuration.
Note: Although the *.keytab file is usually used on non-Windows machines, DFC leverages
the *.keytab file to improve network performance by eliminating Kerberos authentication
communication between Windows machines and the KDC.
In some cases, you can register the SPNs of more than one DFC application to the same account.
For example, in load-balanced environments support for Kerberos can be achieved by joining all
load-balanced nodes into a single account and assigning a single SPN to the cluster. If access to the
service is required through a different SPN (for example, based on the service host IP address rather
than the load balancer name), then this SPN can also be registered with the same account. The
following procedure describes the main steps for registering an SPN using a one-to-one mapping
between the DFC application’s SPN and user account, or a many-to-one mapping in which multiple
SPNs are registered to one user account.
Note: Make sure to enable delegation trust for the service accounts that create the SPNs.
2. Map the DFC application’s SPN to a user and generate the *.keytab file. Mapping the SPN to a
user name, page 40 provides the detailed information.
You can define the naming convention for Kerberos SPNs for Documentum services in the following
ways:
• The standard naming convention that is currently supported:
CS/<repository-name>@<service-domain-name>
Documentum recommends using the standard naming convention. Any non-standard SPN naming
conventions or formats defined by users must be supported by Active Directory.
EMC recommends using a host name rather than an IP address as the host string, for example,
myhost.mydomain.com. The realm is the name of the Kerberos realm, which is defined in the
Kerberos configuration file (Setting up the Kerberos configuration file, page 41 provides detailed
information).
• To map multiple SPNs to a user name using many-to-one mapping, perform the following
steps:
a. Execute the ktpass utility as follows:
ktpass /pass <password> -out <keytab_file> -princ <SPN>
-crypto <crypto_type> +DumpSalt -ptype KRB5_NT_PRINCIPAL +desOnly
/mapOp set /mapUser <user_name> /target <domain_controller>
Remember the salt string and the key version number (kvno) because you need to use
them in Step c.
b. To map the next SPN to the same user account, execute the setspn utility as follows:
setspn -A <SPN> <user_name>
c. Execute the ktpass utility for the second SPN, without mapping it to the user account, as follows:
Note: Use the salt and key version number (kvno) that were displayed as the output
in Step a.
ktpass /pass <password> -out <keytab_file> -princ <SPN>
-crypto <crypto_type> +DumpSalt -ptype KRB5_NT_PRINCIPAL +desOnly
/mapOp set +RawSalt <salt> -in <keytab_file> -kvno <vno>
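The standard SPN naming convention described earlier (CS/<repository-name>@<service-domain-name>) can be sketched as a small helper; the repository and realm values below are illustrative, not real deployment names:

```java
public class SpnBuilder {
    // Builds the standard "CS/<repository-name>@<service-domain-name>" form
    // of the Kerberos SPN for a Documentum repository service.
    static String standardSpn(String repositoryName, String serviceDomainName) {
        return "CS/" + repositoryName + "@" + serviceDomainName;
    }

    public static void main(String[] args) {
        // Illustrative values only.
        System.out.println(standardSpn("myrepo", "MYDOMAIN.COM"));
    }
}
```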
To enable Kerberos support for the DFC application, perform these tasks:
• Setting up the Kerberos configuration file, page 41
• Setting up JAAS configuration, page 42
• Performance best practices, page 43
The Kerberos configuration file (typically named krb5.ini) specifies Kerberos settings such as
the KDC address and default realm name. The realm name includes the KDC and administration
server addresses.
[libdefaults]
default_realm = <REALM>
forwardable = true
ticket_lifetime = 24h
clockskew = 72000
default_tkt_enctypes =
default_tgs_enctypes =
[realms]
<REALM> = {
kdc = <kdc_server_ip>
admin_server = <admin_server_ip>
}
[domain_realm]
<domain> = <REALM>
[logging]
default = c:\kdc.log
kdc = c:\kdc.log
[appdefaults]
autologin = true
forward = true
forwardable = true
encrypt = true
The Java Authentication and Authorization Service (JAAS) configuration file entry contains JAAS
specific settings such as the <LoginContext> name (which is also the name of the configuration
entry), settings for the Kerberos login module, the DFC application’s SPN, and the location of the
*.keytab file. Copy the generated *.keytab file to the location specified in the JAAS configuration
on the DFC application’s machine.
The location and format of the JAAS configuration settings might be different for each application.
You can specify the configuration file setting in a JVM command-line parameter; for example:
-Djava.security.auth.login.config=
In a jaas.conf file (for example, located in C:/jaas.conf), specify the following format for the
JAAS configuration:
<loginContext>
{
<LoginModule> required
debug=true
principal="<SPN>"
realm="<REALM>"
refreshKrb5Config=true
noTGT=true
useKeyTab=true
storeKey=true
doNotPrompt=true
useTicketCache=false
isInitiator=false
keyTab="<dfcuser_keytab_path>";
};
where:
• <SPN> is the DFC application’s service principal name, for example:
DFC/[email protected]
• <LoginModule> is the Kerberos login module to be used to perform user authentication.
For single-domain support only, use:
com.sun.security.auth.module.Krb5LoginModule
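A JAAS entry of the shape shown above can be exercised with plain JDK classes before wiring it into DFC. The sketch below writes a minimal entry to a temporary file, points the JVM at it (the programmatic equivalent of the -Djava.security.auth.login.config parameter), and reads back the login module name; the entry name, keytab path, and SPN are illustrative placeholders:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasEntryCheck {
    // Writes a minimal JAAS configuration entry and verifies that the JVM's
    // default Configuration can resolve it by name.
    static String loginModuleFor(String entryName) throws Exception {
        String entry =
            entryName + " {\n" +
            "  com.sun.security.auth.module.Krb5LoginModule required\n" +
            "  useKeyTab=true\n" +
            "  storeKey=true\n" +
            "  doNotPrompt=true\n" +
            "  isInitiator=false\n" +
            "  keyTab=\"C:/dfcuser.keytab\"\n" +          // illustrative path
            "  principal=\"DFC/[email protected]\";\n" + // illustrative SPN
            "};\n";
        Path conf = Files.createTempFile("jaas", ".conf");
        Files.write(conf, entry.getBytes("UTF-8"));
        // Equivalent to -Djava.security.auth.login.config=<file>.
        System.setProperty("java.security.auth.login.config", conf.toString());
        Configuration.setConfiguration(null); // force a re-read of the property
        AppConfigurationEntry[] entries =
            Configuration.getConfiguration().getAppConfigurationEntry(entryName);
        return entries[0].getLoginModuleName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loginModuleFor("DfcKerberos"));
    }
}
```

This only validates that the entry parses and names the expected login module; an actual login() still requires a reachable KDC and a valid keytab.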
To make DFC multi-domain Kerberos perform better, apply the best practices described in this
section.
Note: The best practices and/or test results are derived or obtained after testing the product in the
EMC testing environment. Every effort is made to simulate common customer usage scenarios
during performance testing, but actual performance results will vary due to differences in hardware
and software configurations, data, and other variables.
Performance issues have been observed when using QUEST libraries for Kerberos multiple-domain
or single-domain authentication. The same issues are not observed with SUN libraries for
Kerberos single-domain authentication. This time lag occurs because SUN libraries use UDP as the
default protocol to communicate with the KDC, whereas QUEST libraries use TCP as the default
protocol.
QUEST initially supported UDP as the default protocol, like SUN, but observed packet dropouts
in user environments. QUEST therefore moved from UDP to TCP for better reliability. EMC
recommends using QUEST’s default protocol, TCP.
The performance issue with QUEST libraries (using TCP) occurs only in networks where the
Kerberos request packet (AS-REQ/TGS-REQ) is smaller than the Ethernet packet size (1472 bytes).
If the Kerberos request packet is large enough (a full Ethernet packet or larger), the performance
degradation does not occur.
If you have a well-behaved, reliable network and want to avoid the performance overhead that
TCP incurs for small Kerberos request packets, you can use UDP to gain a performance benefit.
Based on the value of the QUEST-supported property jcsi.kerberos.maxpacketsize, the QUEST
libraries choose TCP or UDP for Kerberos communication. If the Kerberos request packet is larger
than the value of this property, QUEST uses TCP instead of UDP. However, it is advisable to set
this property to 1460 (close to the Ethernet packet size) to reduce packet dropouts. If you still
notice packet dropouts in your environment, use TCP.
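The protocol choice described above can be sketched as a small helper. This is a model of the documented behavior, not QUEST API code; only the property name jcsi.kerberos.maxpacketsize comes from the text, and the helper itself is an assumption for illustration:

```java
public class KerberosTransportChoice {
    // Mirrors the documented QUEST behavior: Kerberos requests larger than
    // jcsi.kerberos.maxpacketsize go over TCP, smaller ones over UDP.
    static String protocolFor(int requestBytes) {
        int max = Integer.getInteger("jcsi.kerberos.maxpacketsize", 0);
        return requestBytes > max ? "TCP" : "UDP";
    }

    public static void main(String[] args) {
        // Recommended value: close to the Ethernet packet size.
        System.setProperty("jcsi.kerberos.maxpacketsize", "1460");
        System.out.println(protocolFor(400));   // small request -> UDP
        System.out.println(protocolFor(2000));  // full packet or larger -> TCP
    }
}
```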
Constrained delegation restricts delegation to specific services that are configured on an Active
Directory service account. Constrained delegation uses a series of service tickets instead of
transmitting the user’s Ticket Granting Ticket (TGT). Using these tickets, it permits delegation
of the client’s identity to one or more servers, but only to those specifically configured as
permitted authentication delegates. The end user delegates credentials to
an intermediate application in a secure manner, and that application impersonates the end user and
performs the authentication. You can use the Kerberos Delegation utility to delegate the credentials.
The utility performs the following actions:
• Accepts as input the source Service Principal Name (SPN) where the request comes in and the
target SPN for which the Service Ticket (ST) must be raised.
• Raises an ST for the source SPN.
• Accepts a flag indicating whether mutual authentication is required.
The utility generates an ST for the target SPN for the requested delegated user as output. The utility
also provides an API for accessing the underlying Kerberos context required for mutual authentication.
Set the constrained.delegate.enable property to true on the machine where you want to
configure constrained delegation using the Kerberos Delegation utility.
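Only the property name constrained.delegate.enable comes from the text; how the utility reads it internally is not documented here, so the helper below is an illustrative assumption showing the conventional way such a boolean system property is set and read:

```java
public class DelegationConfig {
    // Returns true when constrained delegation is enabled via the
    // constrained.delegate.enable system property described above.
    static boolean constrainedDelegationEnabled() {
        return Boolean.getBoolean("constrained.delegate.enable");
    }

    public static void main(String[] args) {
        System.setProperty("constrained.delegate.enable", "true");
        System.out.println(constrainedDelegationEnabled()); // true
    }
}
```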
The Kerberos Delegation utility uses a configuration file to configure support for multiple
SPNs. Use the following format:
<Unique name of the SPN>
{
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
doNotPrompt=true
storeKey=true
useKeyTab=true
isInitiator=false
keyTab="<full path of the keytab file>"
principal="<Source SPN>";
};
In the diagram, Example A illustrates a referral relationship, whereas Example B illustrates a trust
relationship. Both types of relationship are similar, in that a user in one domain attempts to
access a service in another domain, but they differ in setup and number of hops. Quest libraries are
used for multi-domain support. The configuration file for the Quest library is specified below:
<Unique name of the SPN>
{
com.dstc.security.kerberos.jaas.KerberosLoginModule required
refreshKrb5Config=true
useTicketCache=false
doNotPrompt=true
storeKey=true
useKeyTab=true
noTGT=true
keyTab="<full path of the keytab file>"
principal="<Source SPN without the realm name>"
realm="<realm name>";
};
Perform the following steps to configure a service for constrained delegation on an Active Directory
side.
1. Go to the Delegation tab in the User Properties:
• Select Trust this user for delegation to specified services only.
• Under the option Trust this user for delegation to specified services only, select Use
Kerberos only.
2. Click Add to add a service for constrained delegation.
3. In the Add Services dialog box, click Users or Computers.
4. Enter the object name for constrained delegation and click Check Names and click OK.
The services to be added for constrained delegation are displayed in the Add services dialog box.
5. Select the service and click OK.
6. Click Apply on the Delegation tab.
7. Click OK.
To implement principal support, you must provide your own implementation of the
IDfPrincipalSupport interface and, optionally, of the IDfTrustManager interface. The sole method
to implement in IDfPrincipalSupport is getSession, which takes as arguments the repository and
principal names, and returns an IDfSession. The purpose of this method is to obtain a session for
the principal (that is, the user) on the specified repository using a login ticket generated using the
login credentials of a trusted authenticator (who is a superuser).
public IDfSession getSession
(String docbaseName,String principalName) throws DfPrincipalException
The intent of the IDfTrustManager interface is to obtain login credentials for the trusted authenticator
from a secure data source and return them in an IDfLoginInfo object to be used internally by the
IDfPrincipalSupport object. This is done in the IDfTrustManager.getTrustCredential method:
public IDfLoginInfo getTrustCredential(String docbaseName)
To obtain sessions using DFC principal support, use the IDfClient.setPrincipalSupport method,
passing it an instance of your IDfPrincipalSupport implementation class. This changes the behavior
of any session manager generated from the IDfClient, so that it will delegate the task of obtaining
the session to the IDfPrincipalSupport.getSession method. You then set the principal name in the
session manager using the setPrincipalName method. Thereafter, when a session is requested from
this session manager instance, it will use the principal support object getSession method to obtain a
session for the principal on a specified repository.
The following sample demonstrates the basic principal support mechanism.
public static void demoPrincipalIdentity(String repository, String principalName)
throws Exception
{
// This assumes that your trust manager implementation
// gets and stores the superuser credentials when it is constructed.
IDfTrustManager trustManager = new YourTrustManager();
// YourPrincipalSupport is your IDfPrincipalSupport implementation, which
// uses the trust manager to obtain the authenticator credentials.
IDfClient client = DfClient.getLocalClient();
client.setPrincipalSupport(new YourPrincipalSupport(trustManager));
IDfSessionManager sessMgr = client.newSessionManager();
sessMgr.setPrincipalName(principalName);
IDfSession session = sessMgr.getSession(repository);
try
{
System.out.println("Got session: " + session.getSessionId());
System.out.println("Username: " + session.getLoginInfo().getUser());
}
// Release the session in a finally clause.
finally
{
sessMgr.release(session);
}
}
The primary use of DFC is to add business logic to your applications. The presentation layer is built
using the Web Development Kit, most often via customization of the Webtop interface. Building a
custom UI on top of DFC requires a great deal of work, and will largely recreate effort that has
already been done for you.
While you should not create a completely new interface for your users, it can be helpful to have a small
application that you can use to work with the API directly and see the results. With that in mind, here
is a rudimentary interface class that will enable you to add and test behavior using the Operation API.
These examples were created with Oracle JDeveloper, and feature its idiosyncratic ways of building
the UI. You can use any IDE you prefer, using this example as a guideline.
To the extent possible, the sample classes have been created as self-contained entities. However, a
best practice is to instantiate a Session Manager when you first open an application and continue
to use it until the application stops. Therefore, the logic for creating a Session Manager has been
incorporated as a method of the DfcTutorialFrame class.
This chapter contains the following sections:
• The DfcTutorialFrame class, page 49
• The DfcTutorialApplication class, page 54
• The TutorialGetDirectory class, page 55
• The TutorialSessionManager class, page 56
• Running the tutorial application, page 57
Note: If you belong to the EMC Developer Network, you can download these files from the following
website:
https://community.emc.com/community/edn/documentum
You can search for the document DFC Tutorial Source Files for 6.5 and 6.6 or use the tags dfc and
samplecode.
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfTypedObject;
import com.documentum.fc.common.IDfId;
import java.awt.Dimension;
import java.awt.List;
import java.awt.Rectangle;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.StringTokenizer;
import java.util.Vector;
import javax.swing.ActionMap;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JList;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;
import javax.swing.SwingConstants;
public DfcTutorialFrame() {
try {
jbInit();
} catch (Exception e) {
e.printStackTrace();
}
}
this.getContentPane().add(jTextField_userName, null);
this.getContentPane().add(jLabel_userName, null);
this.getContentPane().add(jLabel_repositoryName, null);
this.getContentPane().add(jTextField_repositoryName, null);
}
if(list_id.getSelectedItem()!=null) {
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
IDfId idObj = mySession.getIdByQualification(
"dm_sysobject where r_object_id='" + objectIdString + "'");
IDfSysObject sysObj = (IDfSysObject) mySession.getObject(idObj);
if(sysObj.getTypeName().equals("dm_folder")) {
jTextField_cwd.setText(directory + "/" + list_id.getSelectedItem());
getDirectory();
StringTokenizer st = new StringTokenizer(jTextField_cwd.getText(),"/");
m_breadcrumbs.removeAllElements();
while(st.hasMoreTokens()) m_breadcrumbs.add(st.nextToken());
StringBuffer newDirectory = new StringBuffer("");
for (int i = 0; i < m_breadcrumbs.size(); i++) {
newDirectory.append("/" + m_breadcrumbs.elementAt(i));
}
jTextField_cwd.setText(newDirectory.toString());
}
else {
getDirectory();
}
}
else {
getDirectory();
}
}
catch (Exception ex) {
jLabel_messages.setText("Exception has been thrown: " + ex);
ex.printStackTrace();
}
finally {
mySessMgr.releaseSession(mySession);
}
}
import java.awt.Dimension;
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.UIManager;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
try {
mySessMgr = new TutorialSessionManager(repository, userName, password);
mySession = mySessMgr.getSession();
IDfFolder myFolder = mySession.getFolderByPath(directory);
fileList = myFolder.getContents(null);
}
catch (Exception ex) {
ex.printStackTrace();
}
finally {
mySessMgr.releaseSession(mySession);
}
return fileList;
}
}
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfLoginInfo;
// Set the identity of the session manager object based on the repository
// name and login information.
sMgr.setIdentity(m_repository, loginInfo);
// Return the populated session manager to the calling class. The session
// manager object now has the required information to connect to the
// repository, but is not actively connected.
return sMgr;
}
This example demonstrated how you can create an application that connects to the content server and
retrieves information. Many of the remaining examples in this manual provide the code for a button
handler that you can add to this test application to call classes that demonstrate implementation
of individual behaviors using DFC.
This chapter describes how to use DFC calls to work with objects stored in the Documentum
repository.
When you are working with the metadata associated with a repository object, using DFC methods
directly can be more efficient and quicker to deploy. When you are working with document content,
a better choice is to work with document operations, covered in Working with Document Operations.
This chapter contains the following main sections:
• Understanding repository objects, page 59
• Creating a cabinet, page 60
• Creating a folder, page 62
• Creating a document object, page 64
• Accessing attributes, page 66
• Setting attributes, page 70
• Removing an attribute value, page 78
• Getting object content, page 81
• Destroying an object, page 82
• Real-time content deletion, page 84
Creating a cabinet
Before you begin manipulating repository objects, you might want to create your own cabinet to work
in. A cabinet is a top level container object used to hold folders. The cabinet is, in fact, a type of
folder, and is created using the interface IDfFolder. Setting its object type to “dm_cabinet” gives it
additional features, including the ability to exist as a top-level object.
You can create a test button for creating a cabinet in the DFC Object Tutorial Frame.
To add the Create Cabinet button to the DFC Base Tutorial Frame
1. Create a JTextField control named jTextField_cabinetName.
2. Create a JButton control named jButton_makeCabinet.
3. Add a button handler method for Make Cabinet.
4. Create the class TutorialMakeCabinet.
The source code for the button handler and TutorialMakeCabinet class follow.
if (!cabinetName.equals(""))
{
TutorialMakeCabinet tmc = new TutorialMakeCabinet();
if (tmc.makeCabinet(
m_sessionManager,
repositoryName,
cabinetName)
)
{
jTextField_cwd.setText("/" + cabinetName);
getDirectory();
jLabel_messages.setText(
"Created cabinet " + cabinetName + "."
);
}
else
{
jLabel_messages.setText("Cabinet creation failed.");
}
}
else
{
jLabel_messages.setText("Enter a unique cabinet name " +
"to create a new cabinet.");
}
}
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
Creating a folder
Creating a folder is similar to creating a cabinet. The essential differences are that you will create a
dm_folder object and identify the parent cabinet or folder in which you’ll create it.
repositoryName,
folderName,
parentName)
)
{
getDirectory();
jLabel_messages.setText("Created folder " + folderName +
".");
}
else
{
jLabel_messages.setText("Folder creation failed.");
}
}
else
{
jLabel_messages.setText("Enter a folder name and current working" +
" directory to create a new folder.");
}
}
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
}
catch (Exception ex)
{
ex.printStackTrace();
return false;
}
finally
{
sessionManager.release(mySession);
}
}
public static final String DM_FOLDER = "dm_folder";
}
)
{
jLabel_messages.setText("Created document " + documentName + ".");
getDirectory();
}
else
{
jLabel_messages.setText("Document creation failed.");
}
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
Accessing attributes
Having created a document or folder, you can examine its attributes by performing a “dump” of all
attributes or by specifying a particular attribute you want to examine.
Dumping Attributes
The most convenient way to begin working with attributes is to “dump” all of the object attributes to
get a list of their names and values.
if (list_id.getSelectedIndex() == -1)
{
jLabel_messages.setText(
"Select an item in the file list to dump attributes."
);
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
TutorialDumpAttributes tda = new TutorialDumpAttributes();
jTextArea_results.setText(
tda.dumpAttributes(
m_sessionManager,
repositoryName,
objectIdString)
);
jLabel_messages.setText("Query complete.");
}
}
import com.documentum.fc.client.IDfContainment;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
{
public TutorialDumpAttributes()
{
}
To add a Get Attribute By Name button to the DFC Base Tutorial Frame
1. Create a JTextField control named jTextField_attributeName
2. Create a JButton control named jButton_getAttributeByName
3. Add a handler method for the Get Attribute By Name button.
4. Create and deploy the class TutorialGetAttributeByName
if (list_id.getSelectedIndex() == -1 ||
jTextField_attributeName.getText().equals(""))
{
jLabel_messages.setText(
"Select an item and enter an attribute name to get" +
" attribute information."
);
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
String attributeName = jTextField_attributeName.getText();
TutorialGetAttributeByName tgabn =
new TutorialGetAttributeByName();
jTextArea_results.setText(
tgabn.getAttribute(
m_sessionManager,
repositoryName,
objectIdString,
attributeName
)
);
jLabel_messages.setText("Query complete.");
}
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfId;
{
ex.printStackTrace();
attribute = new StringBuffer("");
attribute.append(ex.toString());
}
// Always, always, always release the session when finished using it.
finally
{
sessionManager.release(mySession);
}
return attribute.toString();
}
}
To add a Get Attribute By Number button to the DFC Base Tutorial Frame
1. Create a JTextField control named jTextField_attributeNumber
2. Create a JButton control named jButton_getAttributeByNumber
3. Add a handler method for the Get Attribute By Number button.
4. Create and deploy the class TutorialGetAttributeByNumber
if (list_id.getSelectedIndex() == -1 ||
jTextField_attributeNum.getText().equals(""))
{
jLabel_messages.setText(
"Select an item and enter an attribute number to get" +
" attribute information."
);
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
int theIndex = Integer.parseInt(jTextField_attributeNum.getText());
TutorialGetAttributeByNumber tga =
new TutorialGetAttributeByNumber();
jTextArea_results.setText(
tga.getAttribute(
m_sessionManager,
repositoryName,
objectIdString,
theIndex
)
);
jLabel_messages.setText("Query complete.");
}
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfId;
Setting attributes
Attributes are set according to their datatype. For example, you set a String value by calling the
following method.
IDfTypedObject.setString(String attributeName, String value)
The available datatypes are DFC versions of Boolean, integer, String, time, DfID, and undefined. In
practice, you will not work with the DfID because all object IDs are set internally and are not mutable
by custom applications.
Time values (including date values) are stored as numeric values, and are entered or displayed using
a pattern mask. For example, these examples use the pattern IDfTime.DF_TIME_PATTERN2,
mm/dd/yyyy. The complete list of time formats can be found in the Javadoc entry for IDfTime.
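The same pattern-mask idea can be illustrated with plain JDK classes. Note that DFC's mask writes the month as mm, whereas java.text uses MM for the month; the sketch below uses the java.text equivalent and is an illustration, not DfTime itself:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimePatternDemo {
    // DFC's DF_TIME_PATTERN2 mask is "mm/dd/yyyy"; the equivalent
    // java.text pattern is "MM/dd/yyyy" (capital MM = month).
    static String format(Date d) {
        return new SimpleDateFormat("MM/dd/yyyy").format(d);
    }

    static Date parse(String s) throws Exception {
        return new SimpleDateFormat("MM/dd/yyyy").parse(s);
    }

    public static void main(String[] args) throws Exception {
        // A value entered through the mask round-trips unchanged.
        Date d = parse("06/15/2015");
        System.out.println(format(d)); // 06/15/2015
    }
}
```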
To add a Set Attribute By Name button to the DFC Base Tutorial Frame
1. If you have not done so already, create a JTextField control named jTextField_attributeName.
2. Create a JTextField control named jTextField_attributeValue
3. Create a JButton control named jButton_setAttributeByName.
4. Add a button handler method for Set Attribute By Name.
5. Create and deploy the class TutorialSetAttributeByName
if (list_id.getSelectedIndex() == -1 ||
jTextField_attributeName.getText().equals(""))
{
jLabel_messages.setText(
"Select an item, enter an attribute name and value to set" +
" attribute information.");
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
TutorialSetAttributeByName tsabn =
new TutorialSetAttributeByName();
jTextArea_results.setText(
tsabn.setAttributeByName(
m_sessionManager,
repositoryName,
objectIdString,
attributeName,
attributeValue)
);
jLabel_messages.setText("Set attribute complete.");
}
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfType;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfTime;
case IDfType.DF_INTEGER:
sysObj.setInt(attributeName,
Integer.parseInt(attributeValue));
break;
case IDfType.DF_STRING:
sysObj.setString(attributeName, attributeValue);
break;
case IDfType.DF_ID:
IDfId newId = new DfId(attributeValue);
sysObj.setId(attributeName, newId);
break;
case IDfType.DF_TIME:
DfTime newTime =
new DfTime(attributeValue, IDfTime.DF_TIME_PATTERN2);
sysObj.setTime(attributeName, newTime);
break;
case IDfType.DF_UNDEFINED:
sysObj.setString(attributeName, attributeValue);
break;
}
if (sysObj.fetch(null))
{
results = new StringBuffer("Object is no longer current.");
}
else
{
sysObj.save();
results.append("\nNew value: " + attributeValue);
}
return results.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return "Set attribute command failed.";
}
finally
{
sessionManager.release(mySession);
}
}
}
To add a Set Attribute By Number button to the DFC Base Tutorial Frame
1. If you have not done so already, create a JTextField control named jTextField_attributeNumber.
2. If you have not done so already, create a JTextField control named jTextField_attributeValue
3. Create a JButton control named jButton_setAttributeByNumber.
4. Add a button handler method for Set Attribute By Number.
5. Create and deploy the class TutorialSetAttributeByNumber
if (list_id.getSelectedIndex() == -1 ||
jTextField_attributeNum.getText().equals(""))
{
jLabel_messages.setText(
"Select an item, enter an attribute name and value to set" +
" attribute information."
);
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
TutorialSetAttributeByNumber tsabn =
new TutorialSetAttributeByNumber();
jTextArea_results.setText(
tsabn.setAttributeByNumber(
m_sessionManager,
repositoryName,
objectIdString,
attributeNumber,
attributeValue
)
);
jLabel_messages.setText("Set attribute complete.");
}
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfType;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfTime;
try
{
mySession = sessionManager.getSession(repositoryName);
IDfId idObj =
mySession.getIdByQualification(
"dm_sysobject where r_object_id='" +
objectIdString + "'"
);
case IDfType.DF_INTEGER:
sysObj.setInt(attributeName,
Integer.parseInt(attributeValue));
break;
case IDfType.DF_STRING:
sysObj.setString(attributeName, attributeValue);
break;
case IDfType.DF_TIME:
DfTime newTime =
new DfTime(attributeValue, IDfTime.DF_TIME_PATTERN2);
sysObj.setTime(attributeName, newTime);
break;
case IDfType.DF_UNDEFINED:
sysObj.setString(attributeName, attributeValue);
break;
}
if (sysObj.fetch(null))
{
results = new StringBuffer("Object is no longer current.");
}
else
{
sysObj.save();
results.append("\nNew value: " + attributeValue);
}
return results.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return "Set attribute command failed.";
}
finally
{
sessionManager.release(mySession);
}
}
}
To add an Append Repeating Attribute button to the DFC Base Tutorial Frame
1. If you have not already done so, create a JTextField control named jTextField_attributeName.
2. If you have not already done so, create a JTextField control named jTextField_attributeValue.
3. Create a JButton control named jButton_appendRepeatingAttribute
4. Add a button handler method for Append Repeating Attribute.
5. Create and deploy the class TutorialAppendRepeatingAttribute
if (list_id.getSelectedIndex() == -1 ||
jTextField_attributeName.getText().equals(""))
{
jLabel_messages.setText(
"Select an item, enter an attribute name, index, and value " +
"to set repeating attribute information."
);
}
else
{
String objectIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
TutorialAppendRepeatingAttribute tara =
new TutorialAppendRepeatingAttribute();
tara.appendRepeatingAttribute(
m_sessionManager,
repositoryName,
objectIdString,
attributeName,
attributeValue
);
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfType;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfTime;
switch (attributeDatatype)
{
case IDfType.DF_BOOLEAN:
if (attributeValue.equals("F") ||
attributeValue.equals("f") ||
attributeValue.equals("0") ||
attributeValue.equals("false") ||
attributeValue.equals("FALSE"))
sysObj.appendBoolean(attributeName, false);
if (attributeValue.equals("T") ||
attributeValue.equals("t") ||
attributeValue.equals("1") ||
attributeValue.equals("true") ||
attributeValue.equals("TRUE"))
sysObj.appendBoolean(attributeName, true);
results.append("\nNew value: " + attributeValue);
break;
case IDfType.DF_INTEGER:
sysObj.appendInt(attributeName,
Integer.parseInt(attributeValue));
break;
case IDfType.DF_STRING:
sysObj.appendString(attributeName, attributeValue);
break;
case IDfType.DF_TIME:
DfTime newTime =
new DfTime(attributeValue, IDfTime.DF_TIME_PATTERN2);
if (newTime.isValid())
sysObj.appendTime(attributeName, newTime);
else
results = new StringBuffer("");
break;
case IDfType.DF_UNDEFINED:
sysObj.appendString(attributeName, attributeValue);
break;
}
sysObj.save();
return results.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return "Set attribute command failed.";
}
finally
{
sessionManager.release(mySession);
}
}
}
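The boolean branch in the handler above compares the raw string against several accepted spellings; Boolean.parseBoolean accepts only "true"/"false", so a small helper keeps the comparison logic in one place. This refactoring is a suggestion, not part of the original tutorial:

```java
public class BooleanAttributeParser {
    // Accepts the same spellings the tutorial handler recognizes:
    // T/t/1/true/TRUE for true and F/f/0/false/FALSE for false.
    static boolean toBoolean(String value) {
        switch (value) {
            case "T": case "t": case "1": case "true": case "TRUE":
                return true;
            case "F": case "f": case "0": case "false": case "FALSE":
                return false;
            default:
                throw new IllegalArgumentException("Not a boolean value: " + value);
        }
    }

    public static void main(String[] args) {
        System.out.println(toBoolean("T")); // true
        System.out.println(toBoolean("0")); // false
    }
}
```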
need to use more robust decision-making logic, but the API calls will be the same once you’ve
determined which attributes are to be removed.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfId;
String attributeName,
String attributeIndex
)
{
IDfSession mySession = null;
}
return true;
}
catch (Exception ex)
{
ex.printStackTrace();
return false;
}
finally
{
sessionManager.release(mySession);
}
}
}
jTextArea_results.setText(
tgtc.getContent(
m_sessionManager,
repositoryName,
objectIdString
)
);
}
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfId;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
try
{
mySession = sessionManager.getSession(repositoryName);
Destroying an object
You can use the IDfSysObject.destroyAllVersions() method to permanently remove an object
from the repository. If you use the IDfPersistentObject.destroy() method, you destroy only the
specific SysObject corresponding to the r_object_id you provide. In this example, we use the
destroyAllVersions() method, which destroys not only the SysObject with the corresponding ID
but all versions of the object.
If you attempt to destroy a folder that has children, the method returns an error.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfId;
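A minimal sketch of the destroy call described above, assuming sessionManager, repositoryName, and a docId string are available as in the other examples, and adding the DfId class (not shown in the imports above) to construct the ID:

```java
IDfSession mySession = null;
try
{
    mySession = sessionManager.getSession(repositoryName);
    // Fetch the object by its r_object_id (docId is illustrative).
    IDfSysObject sysObj =
        (IDfSysObject) mySession.getObject(new DfId(docId));
    // Destroy this version and every other version on its version tree.
    sysObj.destroyAllVersions();
}
finally
{
    sessionManager.release(mySession);
}
```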
This chapter describes how to use DFC to perform the most common operations on documents.
Most information about documents also applies to the broader category of repository objects
represented by the IDfSysObject interface.
The chapter contains the following main sections:
• Understanding documents, page 85
• Understanding operations, page 87
• Types of operation, page 88
• Basic steps for manipulating documents, page 89
• Operations for manipulating documents, page 92
• Handling document manipulation errors, page 124
• Operations and transactions, page 126
Understanding documents
The Documentum Content Server Fundamentals manual explains Documentum facilities for managing
documents. This section provides a concise summary of what you need to know to understand the
remainder of this chapter.
Documentum maintains a repository of objects that it classifies according to a type hierarchy. For this
discussion, SysObjects are at the top of the hierarchy. A document is a specific kind of SysObject. Its
primary purpose is to help you manage content.
Documentum maintains more than one version of a document. A version tree is an original document
and all of its versions. Every version of the document has a unique object ID, but every version has
the same chronicle ID, namely, the object ID of the original document.
Virtual documents
A virtual document is a container document that includes one or more objects, called components,
organized in a tree structure. A component can be another virtual document or a simple document. A
virtual document can have any number of components, nested to any level. Documentum imposes no
limit on the depth of nesting in a virtual document.
Documentum uses two sets of terminology for virtual documents. In the first set, a virtual document
that contains a component is called the component’s parent, and the component is called the virtual
document’s child. Children, or children of children to any depth, are called descendants.
Note: Internal variables, Javadoc comments, and registry keys sometimes use the alternate spelling
descendent.
The second set of terminology derives from graph theory, even though a virtual document forms a
tree, and not an arbitrary graph. The virtual document and each of its descendants is called a node.
The directed relationship between a parent node and a child node is called an edge.
In both sets of terminology, the original virtual document is sometimes called the root.
You can associate a particular version of a component with the virtual document (this is called
early binding) or you can associate the component’s entire version tree with the virtual document.
The latter allows you to select which version to include at the time you construct the document
(this is called late binding).
Documentum provides a flexible set of rules for controlling the way it assembles documents. An
assembly is a snapshot of a virtual document. It consists of the set of specific component versions that
result from assembling the virtual document according to a set of binding rules. To preserve it, you
must attach it to a SysObject: usually either the root of the virtual document or a SysObject created to
hold the assembly. A SysObject can have at most one attached assembly.
You can version a virtual document and manage its versions just as you do for a simple document.
Deleting a virtual document version also removes any containment objects or assembly objects
associated with that version.
When you copy a virtual document, the server can make a copy of each component, or it can create an
internal reference or pointer to the source component. It maintains information in the containment
object about which of these possibilities to choose. One option is to require the copy operation
to specify the choice.
Whether it copies a component or creates a reference, Documentum creates a new containment
object corresponding to that component.
Note: DFC allows you to process the root of a virtual document as an ordinary document. For
example, suppose that doc is an object of type IDfDocument and also happens to be the root of a
virtual document. If you tell DFC to check out doc, it does not check out any of the descendants. If
you want DFC to check out the descendants along with the root document, you must first execute an
instruction like
IDfVirtualDocument vDoc =
doc.asVirtualDocument("CURRENT", false);
If you tell DFC to check out vDoc, it processes the current version of doc and each of its descendants.
The DFC Javadocs explain the parameters of the asVirtualDocument method.
Documentum represents the nodes of virtual documents by containment objects and the nodes of
assemblies by assembly objects. An assembly object refers to the SysObject to which the assembly is
attached, and to the virtual document from which the assembly came.
If an object appears more than once as a node in a virtual document or assembly, each node has a
separate associated containment object or assembly object. No object can appear as a descendant of
itself in a virtual document.
XML Documents
Documentum’s XML support has many features; detailed information appears in EMC
Documentum Content Server Fundamentals and in the EMC Documentum XML Application Development
Guide.
Using XML support requires you to provide a controlling XML application. When you import an
XML document, DFC examines the controlling application’s configuration file and applies any
chunking rules that you specify there.
If the application’s configuration file specifies chunking rules, DFC creates a virtual document
from the chunks it creates. It imports other documents that the XML document refers to as entity
references or links, and makes them components of the virtual document. It uses attributes of the
containment object associated with a component to remember whether it came from an entity or a
link and to maintain other necessary information. Assembly objects have the same XML-related
attributes as containment objects do.
Understanding operations
Operations are used to manipulate documents in Documentum. Operations provide interfaces
and a processing environment to ensure that Documentum can handle a variety of documents and
collections of documents in a standard way. You obtain an operation of the appropriate kind, place
one or more documents into it, and execute the operation.
All of the examples in this chapter pertain only to documents, but operations can be used to work
with objects of type IDfSysObject, not just the subtype IDfDocument.
For example, to check out a document, take the following steps:
1. Obtain a checkout operation.
2. Add the document to the operation.
3. Execute the operation.
DFC carries out the behind-the-scenes tasks associated with checking out a document. For a
virtual document, for example, DFC adds all of its components to the operation and ensures that
links between them are still valid when it stores the documents into the checkout directory on
the file system. It corrects filename conflicts, and it keeps a local record of which documents it
checks out. This is only a partial description of what DFC does when you check out a document.
Because of the number and complexity of the underlying tasks, DFC wraps seemingly elementary
document-manipulation tasks in operations.
Note: A SysObject in the operations framework is processed only once. If you add the same object
twice to an operation, the operation performs its steps only once for that object. For example, if you
add a document object multiple times in order to retrieve multiple content pages with one operation,
you will only retrieve one page of content. To retrieve multiple content pages (for example,
renditions), you must use a separate operation for each page or rendition.
An IDfClientX object provides factory methods for creating operations. Once you have an IDfClientX
object (say cX) and a SysObject (say doc) representing the document, the code for the checkout
looks like this:
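A minimal sketch of that checkout sequence, assuming cX is the IDfClientX object and doc the document, as described above (error handling is omitted here):

```java
// Obtain a checkout operation from the factory.
IDfCheckoutOperation opCheckout = cX.getCheckoutOperation();
// Add the document; add returns null if it cannot create a node.
IDfCheckoutNode node = (IDfCheckoutNode) opCheckout.add(doc);
// Execute the operation.
opCheckout.execute();
```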
In your own applications, you would add code to handle a null returned by the add method or
errors produced by the execute method.
Types of operation
DFC provides operation types and corresponding nodes (to be explained in subsequent sections)
for many tasks you perform on documents or, where appropriate, files or folders. The following
table summarizes these.
Each operation factory method of IDfClientX instantiates an operation object of the corresponding
type. For example, the getImportOperation factory method instantiates an IDfImportOperation object.
Different operations accept different parameters to control the way they carry out their tasks. Some
parameters are optional, some mandatory.
Note: You must use the setSession method of IDfImportOperation or IDfXMLTransformOperation to
set a repository session before adding nodes to either of these types of operation.
An operation contains a structure of nodes and descendants. When you obtain the operation, it
has no nodes. When you use the operation’s add method to include documents in the operation, it
creates new root nodes. The add method returns the node as an IDfOperationNode object. You must
cast it to the appropriate operation node type to use any methods the type does not inherit from
IDfOperationNode (see Working with nodes, page 91).
Note: If the add method cannot create a node for the specified document, it returns null.
Be sure to test for this case, because the method does not usually throw an exception.
DFC might include additional nodes in the operation. For example, if you add a repository folder,
DFC adds nodes for the documents linked to that folder, as children of the folder’s node in the
operation.
Each node can have zero or more child nodes. If you add a virtual document, the add method creates
as many descendant nodes as necessary to create an image of the virtual document’s structure
within the operation.
You can add objects from more than one repository to an operation.
You can use a variety of methods to obtain and step through all nodes of the operation (see Working
with nodes, page 91). You might want to set parameters on individual nodes differently from the
way you set them on the operation.
The operations package processes the objects in an operation as a group, possibly invoking many
DFC calls for each object. Operations encapsulate Documentum client conventions for registering,
naming, and managing local content files.
DFC executes the operation in a predefined set of steps, applying each step to all of the documents in
the operation before proceeding to the next step. It processes each document in an operation only
once, even if the document appears at more than one node.
Once DFC has executed a step of the operation on all of the documents in the operation, it cannot
execute that step again. If you want to perform the same task again, you must construct a new
operation to do so.
Normally, you use the operation’s execute method and let DFC proceed through the execution steps.
DFC provides a limited ability for you to execute an operation in steps, so that you can perform
special processing between steps. Documentum does not recommend this approach, because the
number and identity of steps in an operation may change with future versions of DFC. If you have
a programming hurdle that you cannot get over without using steps, work with Documentum
Technical Support or Consulting to design a solution.
If DFC encounters an error while processing one node in an operation, it continues to process the
other nodes. For example, if one object in a checkout operation is locked, the operation checks out
the others. Only fatal conditions cause an operation to throw an exception. DFC catches other
exceptions internally and converts them into IDfOperationError objects. The getErrors method
returns an IDfList object containing those errors, or null if there are no errors. The calling program
can examine the errors, and decide whether to undo the operation, or to accept the results for those
objects that did not generate errors.
Once you have checked the errors you may want to examine and further process the results of the
operation. The next section, Working with nodes, page 91, shows how to access the objects and
results associated with the nodes of the operation.
Working with nodes
This section shows how to access the objects and results associated with the nodes of an operation.
Note: Each operation node type (for example, IDfCheckinNode) inherits most of its methods from
IDfOperationNode.
The getChildren method of an IDfOperationNode object returns the first level of nodes under the
given node. You can use this method recursively to step through all of the descendant nodes.
Alternatively, you can use the operation’s getNodes method to obtain a flat list of descendant nodes,
that is, an IDfList object containing all of its descendant nodes without the structure.
These methods return nodes as objects of type IDfOperationNode, not as the specific node type (for
example, IDfCheckinNode).
The getId method of an IDfOperationNode object returns a unique identifier for the node, not the
object ID of the corresponding document. IDfOperationNode does not have a method for obtaining the
object ID of the corresponding object. Each operation node type (for example, IDfCheckinNode) has
its own getObjectId method. You must cast the IDfOperationNode object to a node of the specific
type before obtaining the object ID.
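The recursive traversal described above can be sketched as follows (the method name walk is illustrative, not part of the DFC API):

```java
// Recursively visit a node and all of its descendants.
void walk(IDfOperationNode node) throws DfException
{
    // Process the node here, casting it to the specific node type
    // (for example, IDfCheckinNode) if you need its object ID.
    IDfList children = node.getChildren();
    for (int i = 0; i < children.getCount(); i++)
    {
        walk((IDfOperationNode) children.get(i));
    }
}
```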
Checking out
The execute method of an IDfCheckoutOperation object checks out the documents in the operation.
The checkout operation:
• Locks the document
• Copies the document to your local disk
• Creates registry entries to enable DFC to manage the files it creates on the file system
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
try
{
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Obtain a checkout operation and set the location where the local
// copy of the checked out file is stored.
IDfCheckoutOperation coOp = clientx.getCheckoutOperation();
coOp.setDestinationDirectory("C:\\");
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
// Instantiate a session.
IDfSession mySession = sessionManager.getSession (repositoryName);
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Obtain a checkout operation and set the location where the local
// copy of the checked out file is stored.
IDfCheckoutOperation coOp = clientx.getCheckoutOperation();
coOp.setDestinationDirectory("C:\\");
Checking in
The execute method of an IDfCheckinOperation object checks documents into the repository. It
creates new objects as required, transfers the content to the repository, and removes local files if
appropriate. It checks in existing objects that any of the nodes refer to (for example, through XML
links).
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
import com.documentum.operations.IDfCheckinOperation;
IDfId idObj =
mySession.getIdByQualification(
"dm_sysobject where r_object_id='" + docId + "'"
);
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Obtain a checkin operation and set the version increment. In this
// case, the next major version (version + 1).
IDfCheckinOperation cio = clientx.getCheckinOperation();
cio.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);
// After the item is created, you can get it immediately using the
// getNewObjectId method.
To check in a document, you pass an object of type IDfSysObject or IDfVirtualDocument, not the file on
the local file system, to the operation’s add method. In the local client file registry, DFC records the path
and filename of the local file that represents the content of an object. If you move or rename the file,
DFC loses track of it and reports an error when you try to check it in.
Setting the content file, as in IDfCheckinNode.setFilePath, overrides DFC’s saved information.
If you specify a document that is not checked out, DFC does not check it in. DFC does not treat
this as an error.
You can specify checkin version, symbolic label, or alternate content file, and you can direct DFC to
preserve the local file.
If between checkout and checkin you remove a link between documents, DFC adds the orphaned
document to the checkin operation as a root node, but the relationship between the documents no
longer exists in the repository.
Executing a checkin operation normally results in the creation of new objects in the repository. If
opCheckin is the IDfCheckinOperation object, you can obtain a complete list of the new objects
by calling
IDfList list = opCheckin.getNewObjects();
The list contains the object IDs of the newly created SysObjects.
In addition, the IDfCheckinNode objects associated with the operation are still available after you
execute the operation (see Working with nodes, page 91). You can use their methods to find out many
other facts about the new SysObjects associated with those nodes.
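For example, a sketch of stepping through the returned list (assuming opCheckin has already been executed):

```java
// getNewObjects returns the IDs of the SysObjects the checkin created.
IDfList list = opCheckin.getNewObjects();
for (int i = 0; i < list.getCount(); i++)
{
    // Each element is the object ID of a newly created SysObject.
    System.out.println("Created: " + list.get(i));
}
```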
In Documentum 6.5 and later, if a save or checkin of an object fails, DFC reverts the object
automatically, before returning the object to the application issuing the save or checkin. Reverting the
object removes any changes made to the object before the attempted save or checkin. The application
must then reapply any changes made before the save or checkin before reattempting the save or
checkin operation. This behavior is different from how earlier DMCL releases handled failed saves or
checkins. The DMCL in prior releases simply marked the object with an error state and returned
it to the application.
Cancelling checkout
The execute method of an IDfCancelCheckoutOperation object cancels the checkout of documents by
releasing locks, deleting local files if appropriate, and removing registry entries.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
operation node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
try
{
mySession = sessionManager.getSession(repositoryName);
// Obtain a cancel checkout operation, then populate the cancel
// checkout node by adding the document to the operation.
IDfClientX clientx = new DfClientX();
IDfCancelCheckoutOperation cco = clientx.getCancelCheckoutOperation();
IDfCancelCheckoutNode node = (IDfCancelCheckoutNode) cco.add(doc);
// Check to see if the node is null - this will not throw an error.
if (node == null)
{
return "Node is null";
}
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
operation node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
node = (IDfCancelCheckoutNode) cco.add(vdoc);
}
else
{
node = (IDfCancelCheckoutNode) cco.add(doc);
}
// Check to see if the node is null - this will not throw an error.
if (node == null)
{
return "Node is null";
}
Importing
The execute method of an IDfImportOperation object imports files and directories into the repository.
It creates objects as required, transfers the content to the repository, and removes local files if
appropriate. If any of the nodes of the operation refer to existing local files (for example, through
XML or OLE links), it imports those into the repository too.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.operations.IDfFile;
import com.documentum.operations.IDfImportNode;
import com.documentum.operations.IDfImportOperation;
try
{
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client and obtain an import operation. You must set
// the session before adding files to the operation.
IDfClientX clientx = new DfClientX();
IDfImportOperation opi = clientx.getImportOperation();
opi.setSession(mySession);
// Create the import node, adding the file to the import operation.
IDfImportNode node = (IDfImportNode) opi.add(file);
if (node == null)
return ("Node is null.");
Use the object’s setSession method to specify a repository session and the object’s
setDestinationFolderId method to specify the repository cabinet or folder into which the operation
should import documents.
You must set the session before adding files to the operation.
You can set the destination folder, either on the operation or on each node. The node setting overrides
the operation setting. If you set neither, DFC uses its default destination folder.
You can add an IDfFile object or specify a file system path. You can also specify whether to keep the
file on the file system (the default choice) or delete it after the operation is successful.
If you add a file system directory to the operation, DFC imports all files in that directory and proceeds
recursively to add each subdirectory to the operation. The resulting repository folder hierarchy
mirrors the file system directory hierarchy.
You can also control version labels, object names, object types and formats of the imported objects.
If you are importing a document with OLE links, all of the linked files can be imported for you
automatically and stored as nodes in a virtual document. Microsoft Object Linking and Embedding
(OLE), page 20 provides information on configuring your system to accommodate OLE links.
XML processing
You can import XML files without doing XML processing. If nodeImport is an IDfImportNode object,
you can turn off XML processing on the node and all its descendants by calling
nodeImport.setXMLApplicationName("Ignore");
Turning off this kind of processing can shorten the time it takes DFC to perform the operation.
Executing an import operation results in the creation of new objects in the repository. If opImport is
the IDfImportOperation object, you can obtain a complete list of the new objects by calling
IDfList list = opImport.getNewObjects();
The list contains the object IDs of the newly created SysObjects.
In addition, the IDfImportNode objects associated with the operation are still available after you
execute the operation (see Working with nodes, page 91). You can use their methods to find out many
other facts about the new SysObjects associated with those nodes. For example, you can find out
object IDs, object names, version labels, file paths, and formats.
Exporting
The execute method of an IDfExportOperation object creates copies of documents on the local file
system. If the operation’s add method receives a virtual document as an argument, it also adds all
of the document’s descendants (determined by applying the applicable binding rules), creating a
separate node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfFormat;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfExportNode;
import com.documentum.operations.IDfExportOperation;
try
{
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client and obtain an export operation.
IDfClientX clientx = new DfClientX();
IDfExportOperation eo = clientx.getExportOperation();
// Create an export node, adding the document to the export operation object.
IDfExportNode node = (IDfExportNode) eo.add(doc);
Copying
The execute method of an IDfCopyOperation object copies the current versions of documents or
folders from one repository location to another.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node of the operation for each.
If the add method receives a folder (unless you override this default behavior), it also adds all
documents and folders linked to that folder. This continues recursively until the entire hierarchy of
documents and subfolders under the original folder is part of the operation. The execute method
replicates this hierarchy at the target location.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCopyNode;
import com.documentum.operations.IDfCopyOperation;
String repositoryName,
String docId,
String destination
)
{
IDfSession mySession = null;
StringBuffer sb = new StringBuffer("");
try
{
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client and obtain a copy operation.
IDfClientX clientx = new DfClientX();
IDfCopyOperation co = clientx.getCopyOperation();
// Create a copy node, adding the document to the copy operation object.
IDfCopyNode node = (IDfCopyNode) co.add(doc);
Moving
The execute method of an IDfMoveOperation object moves the current versions of documents or folders
from one repository location to another by unlinking them from the source location and linking them
to the destination. Versions other than the current version remain linked to the original location.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node for each.
If the add method receives a folder (unless you override this default behavior), it adds all documents
and folders linked to that folder. This continues recursively until the entire hierarchy of documents
and subfolders under the original folder is part of the operation. The execute method links this
hierarchy to the target location.
// Create a move node, adding the document to the move operation object.
IDfMoveNode node = (IDfMoveNode) mo.add(doc);
// Execute the move operation and report the outcome.
if (mo.execute())
{
return "Move operation succeeded.";
}
else
{
return "Move operation failed.";
}
}
}
Follow the steps in Steps for manipulating documents, page 89. Options for moving are essentially
the same as for copying.
If the operation entails moving a checked out document, DFC leaves the document unmodified
and reports an error.
Deleting
The execute method of an IDfDeleteOperation object removes documents and folders from the
repository.
If the operation’s add method receives a virtual document as an argument, it also adds all of
the document’s descendants (determined by applying the applicable binding rules), creating a
separate node for each. You can use the enableDeepDeleteVirtualDocumentsInFolders method of
IDfDeleteOperation to override this behavior.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;
String repositoryName,
String docId,
Boolean currentVersionOnly
)
{
IDfSession mySession = null;
try
{
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client and obtain a delete operation.
IDfClientX clientx = new DfClientX();
IDfDeleteOperation delo = clientx.getDeleteOperation();
// Set the deletion policy. You must do this prior to adding nodes to the
// delete operation.
if (currentVersionOnly)
{
// Default value is SELECTED_VERSIONS
// delo.setVersionDeletionPolicy(
// IDfDeleteOperation.SELECTED_VERSIONS
// );
}
else
{
delo.setVersionDeletionPolicy(
IDfDeleteOperation.ALL_VERSIONS
);
}
if (node == null)
return "Node is null.";
Follow the steps in Steps for manipulating documents, page 89. If the operation entails deleting a
checked out document, DFC leaves the document unmodified and reports an error.
Predictive caching
Predictive caching can help you to improve the user experience by sending system objects to Branch
Office Caching Services servers before they are requested by users. For example, a company-wide
report could be sent to all repository caches when it is added to the local repository rather than
waiting for a user request on each server. Another use for this capability would be to cache an
object in response to an advance in a workflow procedure, making the document readily available
for the next user in the flow.
void transformXML2HTMLUsingStylesheetObject(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfDocument docStylesheet ) // XSL stylesheet in repository
throws DfException, IOException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
void transformXML2HTMLUsingStylesheetFile(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfId idDestFolder ) // Destination folder
throws DfException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
void transformXML2HTMLRendition(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfDocument docXml, // Root of the XML document
IDfDocument docStylesheet ) // XSL stylesheet in repository
throws DfException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
opTran.setSession( session );
opTran.setTransformation( docStylesheet );
DFC creates a rendition because the output format differs from the input format and you did not call
opTran.setDestination to specify an output directory.
After you execute an operation, you can use its getErrors method to retrieve an IDfList object
containing the errors. You must cast each element to IDfOperationError to read its error message.
After detecting that the operation’s execute method has returned errors, you can use the operation’s
abort method to undo as much of the operation as possible. You cannot undo XML validation or
transform operations, nor can you restore deleted objects.
// Initialize variables
String strNodeName = "";
String strNodeId = "";
String strSucceeded = "";
IDfId idNodesObj = null;
IDfOperationError error = null;
IDfOperationNode node = null;
You can monitor an operation for progress and errors. Create a class that implements the
IDfOperationMonitor interface and register it by calling the setOperationMonitor method of
IDfOperation. The operation periodically notifies the operation monitor of its progress or of errors
that it encounters.
During execution, DFC calls the methods of the installed operation monitor to report progress or
errors. You can display this information to an end user. In each case DFC expects a response that tells
it whether or not to continue. You can make this decision in the program or ask an end user to decide.
Your operation monitor class must implement the following methods:
• progressReport
DFC supplies the percentage of completion of the operation and of its current step. DFC expects a
response that tells it whether to continue or to abort the operation.
• reportError
DFC passes an object of type IDfOperationError representing the error it has encountered. It
expects a response that tells it whether to continue or to abort the operation.
• getYesNoAnswer
This is the same as reportError, except that DFC gives you more choices. DFC passes an object
of type IDfOperationError representing the error it has encountered. It expects a response
of yes, no, or abort.
The Javadocs explain these methods and arguments in greater detail.
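To illustrate the decision logic a monitor implements, the following sketch uses stand-in types and constants (SimpleMonitor, CONTINUE, and ABORT are invented here for illustration; a real monitor implements IDfOperationMonitor and returns its constants, as described in the Javadocs):

```java
// Stand-in sketch of operation-monitor decision logic. The class name
// and constants are illustrative only; a real monitor implements
// IDfOperationMonitor and uses the constants that interface defines.
final class SimpleMonitor {
    static final int CONTINUE = 1;
    static final int ABORT = 0;

    private final int maxErrors;  // abort after this many errors
    private int errorCount = 0;

    SimpleMonitor(int maxErrors) {
        this.maxErrors = maxErrors;
    }

    // Called periodically with overall and current-step progress
    int progressReport(int operationPercent, int stepPercent) {
        return CONTINUE;  // a UI could ask the end user instead
    }

    // Called when the operation encounters an error
    int reportError(String errorMessage) {
        errorCount++;
        return (errorCount > maxErrors) ? ABORT : CONTINUE;
    }
}
```

In this sketch the decision is made in the program (tolerate one error, then abort); an interactive application would instead display the progress or error to the user and translate the answer into the appropriate return value.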
You can undo most operations by calling an operation’s abort method. The abort method is specific
to each operation, but generally undoes repository actions and cleans up registry entries and local
content files. Some operations (for example, delete) cannot be undone.
If you know an operation only contains objects from a single repository, and the number of objects
being processed is small enough to ensure sufficient database resources, you can wrap the operation
execution in a session transaction.
You can also include operations in session manager transactions. Session manager transactions can
include operations on objects in different repositories, but you must still pay attention to database
resources. Session manager transactions are not completely atomic, because they do not use a
two-phase commit. Transactions, page 35 provides information about what session transactions can
and cannot do.
This chapter introduces the Business Object Framework (BOF). It contains the following major
sections:
• Overview of BOF, page 127
• BOF infrastructure, page 128
• Service-based Business Objects (SBOs), page 130
• Type-based Business Objects (TBOs), page 137
• Calling TBOs and SBOs, page 151
• Calling DFS, JAXB API, or other components from a BOF module, page 156
• Aspects, page 157
Overview of BOF
BOF’s main goals are to centralize and standardize the process of customizing Documentum
functionality. BOF centralizes business logic within the framework. Using BOF, you can develop
business logic that
• Always executes, regardless of the client program
• Can extend the implementation of core Documentum functionality
• Runs well in concert with an application server environment
In order to achieve this, the framework leaves customization of the user interface to the clients.
BOF customizations embody business logic and are independent of considerations of presentation
or format.
If you develop BOF customizations and want to access them from the .NET platform, you must take
additional steps. We do not provide tools to assist you in this. You can, however, expose some custom
functionality as web services. You can access web services from a variety of platforms (in particular,
.NET). The EMC Documentum Web Services Framework Development Guide provides information about
deploying and using the web services.
BOF infrastructure
This section describes the infrastructure that supports the Business Object Framework.
Note: Earlier versions of DFC maintained a registry of TBOs and SBOs on each client machine. That
registry is called the Documentum business object registry (DBOR). The DBOR form of registry is
deprecated, but for now you can still use it, even in systems that contain repository-based registries.
Where both are present, DFC gives preference to the repository-based registry.
Global registry
DFC delivers SBOs from a central repository. That repository’s registry is called the global registry.
The global registry user, who has the user name dm_bof_registry, is the repository user whose
account DFC clients use to connect to the repository and access required service-based objects
or network locations stored in the global registry. This user has Read access only to objects in
/System/Modules, /System/BocsConfig, /dm_bof_registry, and /System/NetworkLocations.
The identity of the global registry is a property of the DFC installation. Different DFC installations
can use different global registries, but a single DFC installation can have only one global registry.
In addition to efficiency, local caching provides backup if the global registry is unavailable.
The implementation and interface JAR files for an SBO are downloaded from the global registry to
the local cache only when the client application first tries to instantiate that SBO.
Thereafter, DFC reloads only those parts of the module that change. For that purpose, DFC checks for
updates to the cached modules whenever an application tries to use them, or after the interval (in
seconds) specified by the DFC property dfc.bof.cache.currency_check_interval in the dfc.properties
file. Its default value is 60 seconds.
The dfc.properties file contains properties that relate to accessing the global registry. BOF and global
registry settings, page 16 provides information about using these properties.
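For reference, a dfc.properties fragment touching these settings might look like the following (the repository name and credentials are placeholders, not values from this guide):

```properties
# Global registry connection (values are placeholders)
dfc.globalregistry.repository=my_global_registry
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=<encrypted_password>

# How often (in seconds) DFC checks cached BOF modules for updates
dfc.bof.cache.currency_check_interval=60
```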
SBO introduction
A service based object (SBO) is a type of module designed to enable developers to access Documentum
functionality by writing small amounts of relevant code. The underlying framework handles most
of the details of connecting to Documentum repositories. SBOs are similar to session beans in an
Enterprise JavaBean (EJB) environment.
SBOs can operate on multiple object types, retrieve objects unrelated to Documentum objects (for
example, external email messages), and perform processing. You can use SBOs to implement
functionality that applies to more than one repository type. For example, a Documentum Inbox
object is an SBO. It retrieves items from a user’s inbox and performs operations like removing and
forwarding items.
You can use SBOs to implement utility functions to be called by multiple TBOs. A TBO has the
references it needs to instantiate an SBO.
You can implement an SBO so that an application server component can call the SBO, and the SBO
can obtain and release repository sessions dynamically as needed.
SBOs are the basis for the web services framework.
SBO architecture
An SBO associates an interface with an implementation class. Each folder under
/System/Modules/SBO corresponds to an SBO. The name of the folder is the name of the SBO, which
by convention is the name of the interface.
SBOs are not associated with a repository type, nor are they specific to the repository in which they
reside. As a result, each DFC installation uses a global registry (see Global registry, page 129). The
dfc.properties file contains the information necessary to enable DFC to fetch SBO implementation
classes from the global registry.
You instantiate SBOs with the newService method of IDfClient, which requires you to pass it a session
manager. The newService method searches the registry for the SBO and instantiates the associated
Java class. Using its session manager, an SBO can access objects from more than one repository.
You can easily design an SBO to be stateless, except for the reference to its session manager.
Note: DFC does not enforce a naming convention for SBOs, but we recommend that you follow the
naming convention explained in Follow the Naming Convention, page 136.
Implementing SBOs
This section explains how to implement an SBO.
An SBO is defined by its interface. Callers cannot instantiate an SBO’s implementation class directly.
The interface should expose only the specific functionality that the SBO provides. A separate
interface, IDfService, provides access to functionality common to all SBOs. The SBO’s implementation
class should not implement IDfService directly, however. Instead, it must extend
DfService, which implements IDfService. Extending DfService ensures that the SBO provides several
methods for revealing information about itself to DFC and to applications that use the SBO.
To create an SBO, first specify its defining interface. Then create an implementation class that
implements the defining interface and extends DfService. DfService is an abstract class that defines
common methods for SBOs.
Override the following abstract methods of DfService to provide information about your SBO:
• getVersion returns the current version of the service as a string.
The version is a string consisting of an integer followed by up to three more dot-separated
integers (for example, 1.0 or 2.1.1.36). The version number is used to determine installation
options.
• getVendorString returns the vendor’s copyright statement (for example, "Copyright 1994-2005
EMC Corporation. All rights reserved.") as a string.
• isCompatible checks whether the class is compatible with a specified service version
This allows you to upgrade service implementations without breaking existing code. Java does
not support multiple versions of interfaces.
• supportsFeature checks whether the string passed as an argument matches a feature that the
SBO supports.
The getVersion and isCompatible methods are important tools for managing SBOs in an open
environment. The getVendorString method provides a convenient way for you to include your
copyright information. The supportsFeature method can be useful if you develop conventions for
naming and describing features.
SBO programming differs little from programming for other environments. The following sections
address the principal additional considerations.
SBOs can maintain state between calls, but they are easier to deploy to multithreaded and other
environments if they do not do so. For example, a checkin service needs parameters like retainLock
and versionLabels. A stateful interface for such a service provides get and set methods for such
parameters. A stateless interface makes you pass the state as calling arguments.
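To make the contrast concrete, here is a hypothetical checkin service expressed both ways (the interface and method names are invented for illustration and are not DFC APIs):

```java
// Two shapes for a hypothetical checkin service (names invented).
// The stateful shape holds parameters between calls:
interface IStatefulCheckin {
    void setRetainLock(boolean retainLock);
    void setVersionLabels(String versionLabels);
    void checkin(String objectId);
}

// The stateless shape passes all state as arguments, so a single
// instance can safely serve many callers and threads:
interface IStatelessCheckin {
    String checkin(String objectId, boolean retainLock, String versionLabels);
}

// Trivial stateless implementation, used only to demonstrate the shape;
// it echoes its arguments instead of performing a real checkin.
final class StatelessCheckin implements IStatelessCheckin {
    public String checkin(String objectId, boolean retainLock, String versionLabels) {
        return objectId + ":" + retainLock + ":" + versionLabels;
    }
}
```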
This section presents session manager related considerations for implementing SBOs.
Overview
When implementing an SBO, you normally use the getSession and releaseSession methods of
DfService to obtain a DFC session and return it when finished. Once you have a session, use the
methods of IDfSession and other DFC interfaces to implement the SBO’s functionality.
If you need to access the session manager directly, however, you can do so from any method
of a service, because the session manager object is a member of the DfService class. The
getSessionManager method returns this object. To request a new session, for example, use the session
manager’s newSession method.
Each SBO method that obtains a repository session must release the session when it is finished
accessing the repository. The following example shows how to structure a method to ensure that it
releases its session, even if exceptions occur:
public void doSomething( String strRepository, . . . ) {
IDfSession session = getSession ( strRepository );
try { /* do something */ }
catch( Exception e ) { /* handle error */ }
finally { releaseSession( session ); }
}
To obtain a session, an SBO needs a repository name. To provide the repository name, you can
design your code in any of the following ways:
• Pass the repository name to every service method.
This allows a stateless operation. Use this approach whenever possible.
• Store the repository name in an instance variable of the SBO, and provide a method to set it
(for example, setRepository (strRepository)).
This makes the repository available from all of the SBO’s methods.
• Extract the repository name from an object ID.
A method that takes an object ID as an argument can extract the repository name from the object
ID (use the getDocbaseNameFromId method of IDfClient).
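As background for the third approach: a Documentum object ID is a 16-character hexadecimal string in which, after the two-character type tag, the next six characters encode the repository (docbase) ID. The getDocbaseNameFromId method performs the actual name lookup; the helper below is not part of DFC and merely extracts the numeric repository ID as an illustration:

```java
// Illustration only: extract the numeric repository (docbase) ID from
// a 16-character Documentum object ID. DFC's
// IDfClient.getDocbaseNameFromId resolves the ID to a repository name;
// this helper is not part of DFC.
final class ObjectIds {
    static long docbaseIdFromObjectId(String objectId) {
        if (objectId == null || objectId.length() != 16)
            throw new IllegalArgumentException("not an object ID: " + objectId);
        // characters 2..7 (zero-based) hold the docbase ID in hex
        return Long.parseLong(objectId.substring(2, 8), 16);
    }
}
```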
The EMC Documentum architecture enables SBOs to return persistent objects to the calling
program. Persistent objects normally maintain their state in the associated session object. But an SBO
must release the sessions it uses before returning to the calling program. At any time thereafter, the
session manager might disconnect the session, making the state of the returned objects invalid.
The calling program must ensure that the session manager does not disconnect the session until the
calling program no longer needs the returned objects.
Another reason for preserving state between SBO calls occurs when a program performs a query or
accesses an object. It must obtain a session and apply that session to any subsequent calls requiring
authentication and Content Server operations. For application servers, this means maintaining the
session information between HTTP requests.
The main means of preserving state information are setSessionManager and transactions.
Maintaining state in a session manager, page 48 describes the setSessionManager mechanism and its
cost in resources. Using Transactions With SBOs, page 134 provides details about using transactions
with SBOs.
You can also use the DfCollectionEx class to return a collection of typed objects from a service.
DfCollectionEx locks the session until you call its close method.
For testing or performance tuning you can examine such session manager state as reference
counters, the number of sessions, and repositories currently connected. Use the getStatistics method
of IDfSessionManager to retrieve an IDfSessionManagerStatistics object that contains the state
information. The statistics object provides a snapshot of the session manager’s internal data as
of the time you call getStatistics. DFC does not update this object if the session manager’s state
subsequently changes.
The DFC Javadocs describe the available state information.
DFC supports two transaction processing mechanisms: session based and session manager based.
Transactions, page 35 describes the differences between the two transaction mechanisms. You cannot
use session based transactions within an SBO method. DFC throws an exception if you try to do so.
Use the following guidelines for transactions within an SBO:
• Never begin a transaction if one is already active.
The isTransactionActive method returns true if the session manager has a transaction active.
• If the SBO does not begin the transaction, do not use commitTransaction or abortTransaction
within the SBO’s methods.
If you need to abort a transaction from within an SBO method, use the session manager’s
setTransactionRollbackOnly method instead, as described in the next paragraph.
When you need the flow of a program to continue when transaction errors occur, use the session
manager’s setTransactionRollbackOnly. Thereafter, DFC silently ignores attempts to commit the
transaction. The owner of the transaction does not know that one of its method calls aborted the
transaction unless it calls the getTransactionRollbackOnly method, which returns true if some part of
the program ever called setTransactionRollbackOnly. Note that setTransactionRollbackOnly does not
throw an exception, so the program continues as if the batched operations were still valid.
The following program illustrates this.
void serviceMethodThatRollsBack( String strRepository, IDfId idDoc )
    throws DfNoTransactionAvailableException, DfException {
  IDfSession session = getSession( strRepository );
  try {
    IDfPersistentObject obj = session.getObject( idDoc );
    obj.checkout();
    modifyObject( obj );
    obj.save();
  }
  catch( Exception e ) {
    getSessionManager().setTransactionRollbackOnly();
    throw new DfException( e.getMessage() );
  }
  finally {
    releaseSession( session );
  }
}
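The rollback-only semantics described above can be modeled in isolation. This is a toy model, not DFC code: it shows only how a set flag silently turns a later commit into a rollback.

```java
// Toy model of the session manager's rollback-only behavior: once the
// flag is set, commitTransaction silently rolls back instead.
final class TxModel {
    private boolean rollbackOnly = false;

    void setTransactionRollbackOnly() { rollbackOnly = true; }
    boolean getTransactionRollbackOnly() { return rollbackOnly; }

    /** Returns true if the transaction really committed. */
    boolean commitTransaction() {
        return !rollbackOnly;  // a set flag turns commit into rollback
    }
}
```

This is why the owner of the transaction must call getTransactionRollbackOnly if it needs to know whether some method call along the way doomed the transaction.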
When more than one thread is involved in session manager transactions, calling beginTransaction
from a second thread causes the session manager to create a new session for the new thread.
The session manager supports transaction handling across multiple services. It does not disconnect
or release sessions while transactions are pending.
For example, suppose one service creates folders and a second service stores documents in these
folders. To make sure that you remove the folders if the document creation fails, place the two
service calls into a transaction. The DFC session transaction is bound to one DFC session, so it
is important to use the same DFC session across the two services calls. Each service performs its
own atomic operation. At the start of each operation, they request a DFC session and at the end
they release this session back to the session pool. The session manager holds on to the session as
long as the transaction remains open.
Use the beginTransaction method to start a new transaction. Use the commitTransaction or
abortTransaction method to end it. You must call getSession after you call beginTransaction, or the
session object cannot participate in the transaction.
Use the isTransactionActive method to ask whether the session manager has a transaction active that
you can join. DFC does not allow nested transactions.
The transaction mechanism handles the following issues:
• With multiple threads, transaction handling operates on the current thread only.
For example, if there is an existing session for one thread, DFC creates a new session for the
second thread automatically. This also means that you cannot begin a transaction in one thread
and commit it in a second thread.
• The session manager provides a separate session for each thread that calls beginTransaction.
For threads that already have a session before the transaction begins, DFC creates a new session.
• When a client starts a transaction using the beginTransaction method, the session manager does
not allow any other DFC-based transactions to occur.
The following example illustrates a client application calling two services that must be inside a
transaction, in which case both calls must succeed, or nothing changes:
sMgr.setIdentity(repo, loginInfo);
IMyService1 s1 = (IMyService1)
client.newService(IMyService1.class.getName(), sMgr);
IMyService2 s2 = (IMyService2)
client.newService(IMyService2.class.getName(), sMgr);
s1.setRepository( strRepository1 );
s2.setRepository( strRepository2 );
sMgr.beginTransaction();
try {
s1.doRepositoryUpdate();
s2.doRepositoryUpdate();
sMgr.commitTransaction();
}
catch (Exception e) {
sMgr.abortTransaction();
}
If either of these service methods throws an exception, the program bypasses commit and executes
abort.
Each of the doRepositoryUpdate methods calls the session manager’s getSession method.
Note that the two services in the example are updating different repositories. Committing or aborting
the managed transaction causes the session manager to commit or abort transactions with each
repository.
Session manager transactions involving more than one repository have an inherent weakness that
arises from their reliance on the separate transaction mechanisms of the databases underlying the
repositories. Transactions, page 35 provides information about what session manager transactions
can and cannot do.
DFC does not enforce a naming convention for SBOs, but we recommend that you give an SBO
the same name as the fully qualified name of the interface it implements. For example, if you
produce an SBO that implements an interface called IContentValidator, you might name it
com.myFirm.services.IContentValidator. If you do this, the call to instantiate an SBO becomes
simple. For example, to instantiate an instance of the SBO that implements the IContentValidator
interface, simply write
IContentValidator cv = (IContentValidator)client.newService(
IContentValidator.class.getName(), sMgr);
The only constraint DFC imposes on SBO names is that names must be unique within a registry.
Instantiate a new SBO each time you need one, rather than reusing one. Calling SBOs, page 151
provides the details.
Make SBOs as close to stateless as possible. Stateful and stateless SBOs, page 132 provides the details.
DFC caches persistent repository data. There is no convenient way to keep a private cache
synchronized with the DFC cache, so rely on the DFC cache, rather than implementing a separate
cache as part of your service’s implementation.
Creating a TBO
The following sections describe how to create a TBO. Here is a summary of the steps required:
1. Create a custom repository type.
2. Create the TBO interface.
3. Create the TBO class.
4. Implement the methods of IDfBusinessObject.
5. Code your business logic by adding new methods and overriding methods of the parent class.
The following sections provide more detailed instructions for building TBOs. The sample code
provided works from the assumption that the TBO is derived from the DfDocument class, and that
its purpose is to extend the behavior of the custom object on checkin and save.
Using Composer, create and configure your custom type. Deploying the SBO and TBO, page 156
provides an example implementation.
Creating an interface for the TBO is generally recommended, but it is optional if you do not intend
to extend the parent class of the TBO with new methods. If you only intend to override methods
inherited from the parent class, a TBO interface is not strictly necessary, but using one can make
your code more self-documenting and make it easier to add new methods to the TBO should you
need them in the future.
The design of the TBO interface should be determined by which methods you want to expose to client
applications and SBOs. If your TBO needs to expose new public methods, declare their signatures in
the TBO interface. Two other questions to consider are (1) whether to extend the interface of the TBO
superclass (e.g. IDfDocument), and (2) whether to extend IDfBusinessObject.
While the TBO class will need to extend the base DFC class (for example DfDocument), you may want
to make the TBO interface more restricted by redeclaring only those methods of the base class that
your business logic requires you to expose to clients. This avoids polluting the custom interface with
unnecessary methods from higher-level DFC interfaces. On the other hand, if your TBO needs to
expose a large number of methods from the base DFC class, it may be more natural to have the TBO
interface extend the interface of the superclass. This is a matter of design preference.
Although not a functional requirement of the BOF framework, it is generally accepted practice for
the TBO interface to extend IDfBusinessObject, merging the TBO’s concerns as a business object and
as a persistent object subtype into a single contract. This enables you to get an instance of the TBO
class and call IDfBusinessObject methods without the complication of a cast to IDfBusinessObject:
IMySop mySop = (IMySop) session.getObject(id);
if (mySop.supportsFeature("some_feature"))
{
mySop.mySopMethod();
}
The following sample TBO interface extends IDfBusinessObject and redeclares a few required
methods of the TBO superclass (rather than extending the interface of the superclass):
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.client.IDfBusinessObject;
/**
* TBO interface intended to override checkout and save behaviors of
* IDfDocument. IDfDocument is not extended because only a few of its
* methods are required. IDfBusinessObject is extended to permit calling
* its methods without casting the TBO instance to IDfBusinessObject.
*/
public interface IMySop extends IDfBusinessObject
{
public boolean isCheckedOut() throws DfException;
public void checkout() throws DfException;
public IDfId checkin(boolean fRetainLock, String versionLabels)
throws DfException;
public void save() throws DfException;
}
The main class for your TBO is the class that will be associated with a custom repository object type
when deploying the TBO. This class will normally extend the DFC type class associated with the
repository type from which your custom repository type is derived. For example, if your custom
repository type my_sop extends dm_document, extend the DfDocument class. In this case the TBO
class must implement IDfBusinessObject (either directly or by implementing your custom TBO
interface that extends IDfBusinessObject) and it must implement IDfDynamicInheritance.
public class MySop extends DfDocument implements IMySop,
IDfDynamicInheritance
You can also create more hierarchical levels between your main TBO class and the
DFC superclass. For example, you may want to place generic methods used by multiple TBOs
in an abstract class. In this case the abstract class extends the DFC superclass and implements
IDfDynamicInheritance, and the main TBO class extends the abstract class and implements the
TBO interface. This results in the correct runtime behavior for dynamic inheritance.
public abstract class MyGenericDoc extends DfDocument
implements IDfDynamicInheritance
public class MySop extends MyGenericDoc implements IMySop
Note that in this situation you would need to package both MyGenericDoc and MySop into the TBO
class jar file, and specify MySop as the main TBO class when deploying the TBO in the repository.
Deploying the SBO and TBO, page 156 provides an example of packaging and deploying business
objects.
To fulfill its contract as a class of type IDfBusinessObject, the TBO class must implement the following
methods:
• getVersion
• getVendorString
• isCompatible
• supportsFeature
The version support features getVersion and isCompatible must have functioning implementations
(these are required and used by the Business Object Framework) and it is important to keep the TBO
version data up-to-date. Functional implementation of the supportsFeature method is optional: you
can provide a dummy implementation that just returns a Boolean value.
The IDfBusinessObject entry in the Javadocs provides detailed information.
getVersion method
The getVersion method must return a string representing the version of the business object, using
the format <major version>.<minor version> (for example 1.10), which can be extended to include as
many as four total integers, separated by periods (for example 1.10.2.12).
getVendorString method
The getVendorString method returns a string containing information about the business object
vendor, generally a copyright string.
isCompatible method
The isCompatible method takes a String argument in the format <major version>.<minor version> (for
example 1.10), which can be extended to include as many as four total integers, separated by periods
(for example 1.10.2.12). The isCompatible method, which is intended to be used in conjunction with
getVersion, must return true if the TBO is compatible with the version and false if it is not.
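One plausible way to implement this pair of methods is sketched below. The compatibility policy shown, treating the TBO as compatible with any requested version that is not newer than its own, is an assumption made for this example; choose a policy that matches your upgrade strategy.

```java
// Hedged sketch of getVersion/isCompatible support. The policy here
// accepts any requested version that does not exceed the TBO's own
// version; adapt it to your own versioning rules.
final class VersionSupport {
    static final String VERSION = "1.10.2.12";  // what getVersion would return

    // Parse up to four dot-separated integers, padding with zeros
    static int[] parse(String v) {
        String[] parts = v.split("\\.");
        int[] n = new int[4];
        for (int i = 0; i < parts.length && i < 4; i++)
            n[i] = Integer.parseInt(parts[i]);
        return n;
    }

    // Equivalent of isCompatible(requested): true when requested <= VERSION
    static boolean isCompatible(String requested) {
        int[] req = parse(requested), own = parse(VERSION);
        for (int i = 0; i < 4; i++)
            if (own[i] != req[i]) return own[i] > req[i];
        return true;  // identical versions are compatible
    }
}
```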
supportsFeature method
The supportsFeature method is passed a string representing an application or service feature, and
returns true if this feature is supported and false otherwise. Its intention is to allow your application
to store lists of features supported by the TBO, perhaps allowing the calling application to switch
off features that are not supported.
Support for features is an optional adjunct to mandatory version compatibility support. Features are
a convenient way of advertising functionality that avoids imposing complicated version checking on
the client. If you choose not to use this method, your TBO can provide a minimal implementation of
supportsFeature that just returns a boolean value.
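A minimal supportsFeature sketch, assuming feature names agreed on between the TBO and its client applications (the names below are invented for illustration):

```java
import java.util.Set;

// Minimal supportsFeature sketch; the feature names are invented
// examples of a naming convention shared with client applications.
final class FeatureSupport {
    private static final Set<String> FEATURES =
        Set.of("auto_numbering", "watermarking");

    static boolean supportsFeature(String feature) {
        return FEATURES.contains(feature);
    }
}
```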
You can implement business logic in your TBO by adding new methods, or by overriding
methods of the class that your TBO class extends. When overriding a method, you will most likely
want to add custom behavior as pre- or postprocessing before or after a call to super.<methodName>.
The following sample shows an override of the IDfSysObject.doCheckin method that writes an
entry to the log.
protected IDfId doCheckin(boolean fRetainLock,
    String versionLabels,
    String oldCompoundArchValue,
    String oldSpecialAppValue,
    String newCompoundArchValue,
    String newSpecialAppValue,
    Object[] extendedArgs) throws DfException
{
  Date now = new Date();
  DfLogger.warn(this, now + " doCheckin() called", null, null);

  // Delegate to the superclass to perform the actual checkin
  return super.doCheckin(fRetainLock, versionLabels,
      oldCompoundArchValue, oldSpecialAppValue,
      newCompoundArchValue, newSpecialAppValue, extendedArgs);
}
Dynamic inheritance
Dynamic inheritance is a BOF mechanism that modifies the class inheritance of a TBO dynamically at
runtime, driven by the hierarchical relationship of associated repository objects. This mechanism
enforces consistency between the repository object hierarchy and the associated class hierarchy. It
also allows you to design polymorphic TBOs that inherit from different superclasses depending on
runtime dynamic resolution of the class hierarchy.
For example, suppose you have the following TBO design, in which repository objects are related
hierarchically, but in which the associated TBO classes each inherit from DfDocument:
The dynamic inheritance mechanism allows you to design reusable components that exhibit different
behaviors at runtime inherited from their dynamically determined superclass. For example, in the
following design-time configuration, the MyDoc class is packaged in two TBOs: one in which it is
associated with type my_sop, and one in which it is associated with type my_report:
At runtime, MyDoc will inherit from GenericSop where it is associated with the my_sop repository
object type, and from GenericReport where it is associated with the my_report repository object type.
Methods of DfSysObject
IDfId doAddESignature (String userName, String password, String signatureJustification, String
formatToSign, String hashAlgorithm, String preSignatureHash, String signatureMethodName,
String applicationProperties, String passThroughArgument1, String passThroughArgument2,
Object[] extendedArgs) throws DfException
void doSetPath (String fileName, String formatName, int pageNumber, String otherFile, Object[]
extendedArgs) throws DfException
void doSuspend (String state, boolean override, boolean fTestOnly, Object[] extendedArgs) throws
DfException
void doUnfreeze (boolean thawComponents, Object[] extendedArgs) throws DfException
void doUnlink (String folderSpec, Object[] extendedArgs) throws DfException
void doUnmark (String versionLabels, Object[] extendedArgs) throws DfException
void doUnRegisterEvent (String event, Object[] extendedArgs) throws DfException
void doUpdatePart (IDfId containmentId, String versionLabel, double orderNumber, boolean
useNodeVerLabel, boolean followAssembly, int copyChild, String containType, String containDesc,
Object[] extendedArgs) throws DfException
void doUseACL (String aclType, Object[] extendedArgs) throws DfException
void doVerifyESignature (Object[] extendedArgs) throws DfException
Methods of DfPersistentObject
void doAppendString (String attrName, String value, Object[] extendedArgs) throws DfException
String doGetString (String attrName, int valueIndex, Object[] extendedArgs) throws DfException
void doInsertString (String attrName, int valueIndex, String value, Object[] extendedArgs) throws
DfException
void doSetString (String attrName, int valueIndex, String value, Object[] extendedArgs) throws
DfException
void doRemove (String attrName, int beginIndex, int endIndex, Object[] extendedArgs) throws
DfException
Methods of DfGroup
boolean doAddGroup (String groupName, Object[] extendedArgs) throws DfException
Calling SBOs
This section provides rules and guidelines for instantiating SBOs and calling their methods.
The client application should instantiate a new SBO each time it needs one, rather than reusing one.
For example, to call a service during an HTTP request in a web application, instantiate the service,
execute the appropriate methods, then abandon the service object.
This approach is thread safe and efficient, because instantiating a service requires little resource
overhead. The required steps to instantiate a service are:
1. Prepare an IDfLoginInfo object containing the necessary login information.
2. Instantiate a session manager object.
3. Call the service factory method.
An SBO client application uses the newService factory method of IDfClient to instantiate a service:
public IDfService newService ( String name, IDfSessionManager sMgr )
throws DfServiceException;
The method takes the service name and a session manager as parameters, and returns an IDfService
reference that you must cast to the specific service interface. The newService method uses the
service name to look up the Java implementation class in the registry. It stores the session manager as
a member of the service, so that the service implementation can access the session manager when it
needs a DFC session.
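The three steps above can be sketched as follows. This is an illustrative fragment only: the service interface IMyService and the user, password, and repository names are placeholders, and the code requires a configured DFC environment to compile and run.

```java
// Illustrative sketch only; IMyService and the credential values are
// placeholders. Requires a configured DFC environment.
IDfClientX clientx = new DfClientX();
IDfClient client = clientx.getLocalClient();

// 1. Prepare an IDfLoginInfo object with the login information.
IDfLoginInfo loginInfo = clientx.getLoginInfo();
loginInfo.setUser("myUserName");
loginInfo.setPassword("myPassword");

// 2. Instantiate a session manager and register the identity.
IDfSessionManager sessionManager = client.newSessionManager();
sessionManager.setIdentity("myRepository", loginInfo);

// 3. Call the service factory method and cast to the specific interface.
IMyService service =
    (IMyService) client.newService(IMyService.class.getName(), sessionManager);
```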
Calling TBOs
Client applications and methods of SBOs can use TBOs. Use a factory method of IDfSession to
instantiate a TBO’s class. Release the session when you finish with the object.
Within a method of an SBO, use getSession to obtain a session from the session manager. DFC
releases the session when the service method is finished, making the session object invalid.
Use the setSessionManager method to transfer a TBO to the control of the session manager when
you want to:
• Release the DFC session but keep an instance of the TBO.
• Store the TBO in the SBO state.
Maintaining state in a session manager, page 48 provides information about the substantial costs of
using the setSessionManager method.
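A minimal sketch of this pattern inside an SBO method follows. It is illustrative only and assumes a custom type my_sop with a registered TBO interface ITutorialTBO; getSession and releaseSession are inherited from DfService.

```java
// Inside a method of a DfService subclass (sketch; requires a DFC environment).
IDfSession session = getSession(repositoryName);
try
{
    // newObject is a factory method of IDfSession; because a TBO is
    // registered for my_sop, the returned object can be cast to the
    // TBO interface.
    ITutorialTBO doc = (ITutorialTBO) session.newObject("my_sop");
    // ... work with the TBO while the session is valid ...
}
finally
{
    // Release the session when you finish with the object.
    releaseSession(session);
}
```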
ITutorialSBO
Create an interface for the service-based object. This interface declares the empty setFlavorSBO
method, to be overridden in the implementation class. All SBO interfaces must extend the IDfService
interface.
import com.documentum.fc.client.IDfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
TutorialSBO
The TutorialSBO class extends the DfService class, which provides fields and methods common
to all services.
import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
// Custom method. This method sets a string value on the system object.
// You can set any number of values of any type (for example, int, double,
// boolean) using similar methods.
ITutorialTBO
The interface for the TBO is trivial — its only function is to extend the IDfBusinessObject interface,
a requirement for all TBOs.
import com.documentum.fc.client.IDfBusinessObject;
TutorialTBO
The TutorialTBO is the class that pulls the entire example together. This class overrides the doSave(),
doSaveEx(), and doCheckin() methods of DfSysObject and uses the setFlavorSBO() method of
TutorialSBO to add a string value to objects of our custom type.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.DfDocument;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;
/**
* simple TBO that overrides the behavior of save(), saveLock() and checkinEx()
*/
if( strFeatures.indexOf( s ) == -1 )
return false;
return true;
}
/*
* Overridden IDfSysObject methods. These methods intercept the save(),
* saveLock(), and checkinEx() methods, use the local setFlavor method
* to attach a String value to the current system object, then pass
* control to the parent class to complete the operation.
*/
ClassLoader oldContextClassLoader =
Thread.currentThread().getContextClassLoader();
try
{
// Temporarily make this class's loader the thread context class loader.
ClassLoader classClassLoader = this.getClass().getClassLoader();
Thread.currentThread().setContextClassLoader(classClassLoader);
// call dfs service here
}
catch (ServiceException e)
{
// Handle or log the exception from the DFS service call.
}
finally
{
Thread.currentThread().setContextClassLoader(oldContextClassLoader);
}
Aspects
Aspects are a mechanism for adding behavior, attributes, or both to a Documentum object instance
without changing its type definition. They are similar to TBOs, but they are not associated with
any one document type. Aspects are also late-bound rather than early-bound, so they can be added
to or removed from an object as needed.
Aspects are a BOF type (dmc_aspect_type). Like other BOF types, they have these characteristics:
• Aspects are installed into a repository.
• Aspects are downloaded on demand and cached on the local file system.
• When aspect code changes in the repository, the change is automatically detected and the new
code is "hot deployed" to the DFC runtime.
Examples of usage
One use for aspects would be to attach behavior and attributes to objects at a particular time in their
lifecycle. For example, you might have objects that represent customer contact records. When a
contact becomes a customer, you could attach an aspect that encapsulates the additional information
required to provide customer support. This way, the system won’t be burdened with maintenance of
empty fields for the much larger set of prospective customers.
If you defined levels of support, you might have an additional level of support for “gold” customers.
You could define another aspect reflecting the additional behavior and fields for the higher level of
support, and attach them as needed.
Another scenario might center around document retention. For example, your company might
have requirements for retaining certain legal documents (contracts, invoices, schematics) for a
specific period of time. You can attach an aspect that will record the date the document was created
and the length of time the document will have to be retained. This way, you are able to attach the
retention aspect to documents regardless of object type, and only to those documents that have
retention requirements.
You will want to use aspects any time you are introducing cross-type functionality. You can use them
when you are creating elements of a common application infrastructure. You can use them when
upgrading an existing data model and you want to avoid performing a database upgrade. You can
use them any time you are introducing functionality on a per-instance basis.
Creating an aspect
Aspects are created in a similar fashion to other BOF modules.
1. Decide what your aspect will provide: behavior, attributes, or both.
2. Create the interface and implementation classes. Write any new behavior, override existing
behavior, and provide getters and setters to your aspect attributes.
3. Deploy the aspect module using the Composer tool. The EMC Documentum Composer User Guide
provides the detailed information.
As an example, we’ll walk through the steps of implementing a simple aspect. Our aspect is designed
to be attached to a document that stores contact information. The aspect identifies the contact as a
customer and indicates the level of service (three possible values — customer, silver, gold). It will also
track the expiration date of the customer’s subscription.
Define the new behavior for your aspect in an interface. In this case, we’ll add getters and setters for
two attributes: service_level and expiration_date.
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfTime;
Now that we have our interface, we can implement it with a custom class.
import com.documentum.fc.client.DfDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfTime;
import dfctestenvironment.ICustomerServiceAspect;
import java.util.GregorianCalendar;
ex.printStackTrace();
return "Exception thrown: " + ex.toString();
}
}
// Getters and setters for the expiration_date and service_level custom attributes.
public IDfTime getExpirationDate() throws DfException
{
return getTime("customer_service_aspect.expiration_date");
}
public String getServiceLevel() throws DfException
{
return getString("customer_service_aspect.service_level");
}
public void setExpirationDate(IDfTime expirationDate) throws DfException {
setTime("customer_service_aspect.expiration_date", expirationDate);
}
public void setServiceLevel (String serviceLevel) throws DfException {
setString("customer_service_aspect.service_level", serviceLevel);
}
}
The EMC Documentum Foundation Classes Release Notes Version 6 provides details on deploying aspect
modules.
TestCustomerServiceAspect
Once you have compiled and deployed your aspect classes and defined the aspect on the server, you
can use the class to set and get values in the custom aspect, and to test the behavior for adjusting the
expiration date by month. This example is compatible with the sample environment described in
Chapter 4, Creating a Test Application.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.aspect.IDfAspects;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfTime;
String serviceLevel,
String expirationDate
)
{
// Instantiate a client.
IDfClientX clientx = new DfClientX();
try {
String result = "";
result = ((ICustomerServiceAspect)doc).extendExpirationDate(months);
return result;
}
catch (Exception ex) {
ex.printStackTrace();
return "Exception thrown.";
}
}
public String setExpirationDate(
IDfSession mySession,
String docId,
String expDate)
{
// Instantiate a client.
IDfClientX clientx = new DfClientX();
try {
String result = "";
IDfTime expirationDate = clientx.getTime(
expDate,
IDfTime.DF_TIME_PATTERN1
);
// Get the document instance using the document ID.
IDfDocument doc =
(IDfDocument) mySession.getObject(new DfId(docId));
}
catch (Exception ex) {
ex.printStackTrace();
return "Exception thrown.";
}
}
try {
serviceLevel);
...
//Override doSave()
protected synchronized void doSave(
boolean saveLock,
String v,
Object[] args)
{
if (this.getAspects().findString("my_retention_aspect") < 0) {
MyAttachCallback myCallback = new MyAttachCallback();
this.attachAspect("my_retention_aspect", myCallback);
}
super.doSave(saveLock, v, args);
}
obj.save();
}
}
By default, dm_sysobject and its sub-types are enabled for aspects. This includes any custom
object sub-types. Any non-sysobject application type can be enabled for use with aspects using
the following syntax.
ALTER TYPE type_name ALLOW ASPECTS
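For example, assuming a hypothetical non-sysobject type named my_app_type:
ALTER TYPE my_app_type ALLOW ASPECTS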
Default aspects
Type definitions can include a default set of aspects. This allows you to modify the data model and
behavior for future instances. It also ensures that specific aspects are attached to all new instances
of the type, no matter which application creates the object.
All aspect attributes in a DQL statement must be fully qualified as aspect_name.attribute_name. For
example:
SELECT r_object_id, my_retention_aspect.retained_since
FROM my_sop WHERE my_retention_aspect.years_to_retain = 10
If more than one type is specified in the FROM clause of a DQL statement, aspect attributes should be
further qualified as type_name.aspect_name.attribute_name or alias_name.aspect_name.attribute_name.
Aspect attributes can appear anywhere in a DQL statement that the syntax legally permits a
normal attribute.
Full-text index
By default, full-text indexing is turned off for aspects. You can control which aspect attributes have
full-text indexing using the following DQL syntax.
ALTER ASPECT aspect_name FULLTEXT SUPPORT ADD | DROP a1, a2,...
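For example, to add full-text support for the service_level attribute of the customer_service_aspect created earlier (illustrative only):
ALTER ASPECT customer_service_aspect FULLTEXT SUPPORT ADD service_level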
Object replication
Aspect attributes can be replicated in a second repository just as normal attributes are replicated
(“dump and load” procedures). However, the referenced aspects must be available on the target
repository.
This chapter covers some of the basics of working programmatically with virtual documents. It
includes the following sections:
• Understanding virtual documents, page 167
• Setting version labels, page 168
• Getting version labels, page 171
• Creating a virtual document, page 173
• Traversing the virtual document structure, page 176
• Binding to a version label, page 178
• Clearing a version label binding, page 181
• Removing a virtual document child, page 182
Virtual documents can be linked to whatever is currently the CURRENT version of a child node, or
they can be linked to a specific version by its version label (either the system-supplied version
number or a user-supplied symbolic version label). For example, you could label a version of each document in a
publication August and create a virtual document linking to the versions of the nodes that have that
label. This way, regardless of the updates made to the document for September, the August virtual
document would point to the nodes used at the time of its publication.
To add a Set Version Labels button to the DFC Base Tutorial Frame
1. Create a JTextField control named jTextField_versionLabel.
2. Create a JButton control named jButton_setVersionLabel.
3. Update the TutorialCheckIn class to set the version label on check in.
4. Create the class TutorialSetVersion.
5. Add a button handler method for Set Version Label.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
import com.documentum.operations.IDfCheckinOperation;
mySession = sessionManager.getSession(repositoryName);
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Set the version increment. In this case, the next major version
// ( version + 1)
cio.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);
// After the item is created, you can get it immediately using the
// getNewObjectId method.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
else
{
String repositoryName = jTextField_repositoryName.getText();
String childIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
String versionLabel = jTextField_versionLabel.getText();
TutorialSetVersion tsv =
new TutorialSetVersion();
jTextArea_results.setText(
tsv.setVersion(
m_sessionManager,
repositoryName,
childIdString,
versionLabel
)
);
initDirectory();
getDirectory();
}
}
To add a Get Version Labels button to the DFC Base Tutorial Frame
1. If you haven’t done so already, create a JTextField control named jTextField_versionLabel.
2. Create a JButton control named jButton_getVersionLabel.
3. Create the class TutorialGetVersion.
4. Add a button handler method for Get Version Label.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVersionLabels;
import com.documentum.fc.common.IDfId;
IDfVersionLabels vl = sysObj.getVersionLabels();
for (int i=0; i<vl.getVersionLabelCount();i++)
{
if (i==0) {
sb.append(vl.getVersionLabel(i));
}
else
{
sb.append ("," + vl.getVersionLabel(i));
}
}
return sb.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return ("Failed to get version labels for selected object.");
}
finally
{
sessionManager.release(mySession);
}
}
}
To add an Add Virtual Document Node button to the DFC Base Tutorial Frame
1. Create a JTextField control named jTextField_virtualDocumentParent.
2. Create a JButton control named jButton_setVirtualDocumentParent.
3. Create a JTextField control named jTextField_precedingSibling.
4. Create a JButton control named jButton_setPrecedingSibling.
5. Create a JButton control named jButton_clearSibling.
6. Create the class TutorialAddVirtualDocumentNode.
7. Add a button handler method for Set Virtual Document Parent.
8. Add a button handler method for Set Preceding Sibling.
9. Add a button handler method for Clear Sibling.
10. Add a button handler method for Add Virtual Document Node.
Example 8-6. Handler for the Set Virtual Document Parent button
private void setVirtualDocumentParent()
{
// Store the parent ID in a global variable for use by other operations.
m_parentId = m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
jTextField_virtualDocumentParent.setText(
jTextField_cwd.getText()+ "/" + list_id.getSelectedItem()
);
Example 8-9. Handler for the Add Virtual Document Node button
private void jButton_addVirtualDocumentNode_actionPerformed(ActionEvent e)
{
if (m_parentId == null)
{
jLabel_messages.setText("Please reset the virtual document parent.");
}
else
{
String repositoryName = jTextField_repositoryName.getText();
String childIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
String bindingLabel = jTextField_versionLabel.getText();
TutorialAddVirtualDocumentNode tavdn =
new TutorialAddVirtualDocumentNode();
jTextArea_results.setText(
tavdn.addNode(
m_sessionManager,
repositoryName,
m_parentId,
childIdString,
m_siblingId,
bindingLabel
)
);
initDirectory();
getDirectory();
m_parentId = null;
}
}
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.IDfId;
// Instantiate the sibling object (if not null) as a virtual document node.
IDfVirtualDocumentNode siblingNode = null;
if (siblingIdString != null && !siblingIdString.equals("")) {
IDfId siblingId = mySession.getIdByQualification(
To add a Traverse Virtual Document button to the DFC Base Tutorial Frame
1. Create a jButton control named jButton_traverseVirtualDocument.
2. Create the TutorialTraverseVirtualDocument class.
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.IDfId;
}
else
{
// Otherwise, capture interesting information about the child node.
sysObj = child.getSelectedObject();
results.append( "\nNode Name: " + sysObj.getObjectName() +
"\n Version: " + sysObj.getVersionLabel(0) +
"\n VDM Number: " + child.getVDMNumber() +
"\n Binding Label: " + child.getBinding());
}
}
return results.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return ("Traversal failed.");
}
finally
{
sessionManager.release(mySession);
}
}
}
TutorialTraverseVirtualDocument ttvd =
new TutorialTraverseVirtualDocument();
jTextArea_results.setText(
ttvd.traverseVirtualDocument(
m_sessionManager,
repositoryName,
docId
)
);
}
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
if (sysObj != null)
{
(IDfSysObject) mySession.getObject(childId);
To add a Remove Virtual Document Child button to the DFC Base Tutorial Frame
1. Create a jButton control named jButton_removeVirtualDocumentChild.
2. Create the TutorialRemoveVirtualDocumentNode class.
3. Create the handler for the Remove Virtual Document Child button.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.IDfId;
if (sysObj != null)
{
sessionManager.release(mySession);
}
}
}
Example 8-17. Handler for the Remove Virtual Document Child button
private void jButton_removeVirtualDocumentChild_actionPerformed(ActionEvent e)
{
if (m_parentId == null)
{
jLabel_messages.setText("Please reset the parent field.");
}
else
{
String repositoryName = jTextField_repositoryName.getText();
String childIdString =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
TutorialRemoveVirtualDocumentNode trvdn =
new TutorialRemoveVirtualDocumentNode();
jTextArea_results.setText(
trvdn.removeNode(
m_sessionManager,
repositoryName,
m_parentId,
childIdString
)
);
initDirectory();
getDirectory();
Because DFC is the principal low-level interface to all Documentum functionality, there are many
DFC interfaces that this manual covers only superficially. They provide access to features that other
documentation covers in more detail. For example, the Server Fundamentals manual describes virtual
documents and access control. The DFC Javadocs provide the additional information necessary to
enable you to take advantage of those features. Similarly, the Federated Search Services (FS2) product
includes extensive capabilities for searching Documentum repositories. The DFC Javadocs provide
information about how to use DFC to access that functionality.
This chapter introduces some of the Documentum functionality that you can use DFC to access. It
contains the following major sections:
• Security Services, page 185
• XML, page 186
• Virtual Documents, page 186
• Workflows, page 186
• Document Lifecycles, page 187
• Validation Expressions in Java, page 187
• Unified Login, page 188
• Search Service, page 188
• WORM Support for Filestores, page 188
Security Services
Content Server provides a variety of security features. From the DFC standpoint, they fall into the
following categories:
• User authentication
Chapter 3, Sessions and Session Managers provides the detailed information.
• Object permissions
The EMC Documentum Server Fundamentals Guide and the DFC Javadocs for IDfACL, IDfPermit,
and other interfaces provide the detailed information.
DFC also provides a feature related to folder permissions. Users may have permission to view an
object but not have permission to view all of the folders to which it is linked. The IDfObjectPath
interface and the getObjectPaths method of IDfSession provide a powerful and flexible mechanism
for finding paths for which the given user has the necessary permissions. The Javadocs provide
more details.
XML
Chapter 6, Working with Document Operations provides some information about working with XML.
DFC provides substantial support for the Documentum XML capabilities. The EMC Documentum
XML Application Development Guide provides the details on how to use these capabilities.
Virtual Documents
Chapter 6, Working with Document Operations provides some information about working with
virtual documents. The EMC Documentum Server Fundamentals manual and the DFC Javadocs for the
IDfVirtualDocument interface provide the detailed information.
Workflows
The Server Fundamentals manual provides a thorough treatment of the concepts underlying
workflows. DFC provides interfaces to support the construction and use of workflows, but there
is almost no reason to use those interfaces directly. The workflow manager and business process
manager software packages handle all of those details.
Individual workflow tasks can have methods associated with them. You can program these methods
in Java and call DFC from them. These methods run on the method server, an application server that
resides on the Content Server machine and is dedicated to running Content Server methods. The
code for these methods resides in the repository’s registry as modules. Modules and registries, page
128 provides more information about registries and modules.
The com.documentum.fc.lifecycle package provides the following interfaces for use by modules that
implement lifecycle actions:
• IDfLifecycleUserEntryCriteria to implement userEntryCriteria
• IDfLifecycleUserAction to implement userAction
• IDfLifecycleUserPostProcessing to implement userPostProcessing
There is no need to extend DfService, but you can do so. You need only implement IDfModule,
because lifecycles are modules, not SBOs.
Document Lifecycles
The Server Fundamentals manual provides information about lifecycles. There are no DFC interfaces
for constructing document lifecycles. You can define actions to take place at various stages of a
document’s lifecycle. You can code these in Java to run on the Content Server’s method server. Such
Java methods must implement the appropriate interfaces from the following:
• IDfLifecycleAction.java
• IDfLifecycleUserAction.java
• IDfLifecycleUserEntryCriteria.java
• IDfLifecycleUserPostProcessing.java
• IDfLifecycleValidate.java
The code for these methods resides in the repository’s registry (see Modules and registries, page
128) as modules.
Unified Login
If you are using Unified Login with your DFC client, there is an important requirement for
authentication to work. Unified Login relies on the Windows SID authentication mechanism. DFC
forms an SID based on the login information from the client machine and sends it to the server,
which authenticates it using the Windows Domain Controller (DC). If the DC does not recognize the
client SID, authentication fails. Therefore, the client and repository machines must be in the same
workgroup or domain, and the user must be registered in the same domain as the repository.
Search Service
The DFC search service replaces prior mechanisms for building and running queries. You can use the
IDfQuery interface, which is not part of the search service, for simple queries. The search service
provides the ability to run searches across multiple Documentum repositories and, in conjunction
with the Federated Search Services (FS2) product, external repositories as well.
The Javadocs for the com.documentum.fc.client.search package provide a description of how to use
this capability.
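For comparison, a simple IDfQuery call looks like this. This is a sketch only; it assumes an open IDfSession named session and requires a DFC environment to run.

```java
// Illustrative sketch; requires a DFC session.
IDfQuery query = new DfQuery();
query.setDQL("SELECT r_object_id, object_name FROM dm_document");
IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);
try
{
    while (results.next())
    {
        System.out.println(results.getString("object_name"));
    }
}
finally
{
    // Always close collections to avoid resource leaks.
    results.close();
}
```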
Index
A
abort method, 124, 126
abortTransaction method, 135
add method. See operations, add method
addNode method, 173
aspects, 128
assemblies. See virtual documents, assemblies
assembly objects. See virtual documents, assembly objects
asVirtualDocument method, 86, 101, 114, 120, 173
authentication
  principal, 46
B
beginTransaction method, 135
best practices
  caching repository objects, 137
  reusing SBOs, 137
  SBO state information, 137
binding. See virtual documents, binding
C
caches, 137
  See also persistent caching
casting, 90, 92, 124 to 125, 152
children. See virtual documents, terminology
chronicle ID, 178
chronicle IDs, 85
chunking rules, 87
classes
  extending, 24
  instantiating, 24
collections
  leaks, 18
COM (Microsoft component object model), 24
com package, 24
commitTransaction method, 135
config directory, 15
containment objects. See virtual documents, containment objects
copy_child attribute, 86
CURRENT, 181
CURRENT version, 168
D
datatypes, 70
DBOR (Documentum business object registry), 129
Desktop client program, 19
destroyAllVersions method, 82
dfc.bof.cache.currency_check_interval, 17
dfc.bof.cacheconsistency.interval property, 17
dfc.bof.registry.connect.attempt.interval property, 17
dfc.config.check_interval property, 19
dfc.data.dir, 18
dfc.diagnostics.resources.enable, 18
dfc.globalregistry.password property, 17
dfc.globalregistry.repository property, 17
dfc.globalregistry.username property, 17
dfc.housekeeping.cleanup.interval property, 19
dfc.properties, 20
dfc.properties file, 16
dfc.registry.mode property, 19
dfc.session.allow_trusted_login, 20
dfcfull.properties file, 16
DfClientX class, 25
DfCollectionEx class, 133
DfDborNotFoundException class, 136
DfException class, 26, 124
DfNoTransactionAvailableException class, 134
DfService class, 128, 131 to 132
DfServiceCriticalException class, 136
DfServiceException class, 136
DfServiceInstantiationException class, 136
DfServiceNotFoundException class, 136
directories (file system), 107
dmc_module type, 128
DMCL process, 25
docbroker, 18
document manipulation. See operations
DONT_RECORD_IN_REGISTRY field, 108
DTDs (document type definitions), 120
dump method, 66
E
edges. See virtual documents, terminology
enableDeepDeleteVirtualDocumentsInFolders method, 116
error handlers, 26, 124
execute method. See operations, execute method
External applications
  XML support, 87
F
factory methods, 23 to 24, 131
Federated Search Services (FS2), 19
FileOutputStream class, 121
G
getCancelCheckoutOperation method, 101
getCheckinOperation method, 98
getChildren method, 91
getDefaultDestinationDirectory method, 110
getDeleteOperation method, 116
getDestinationDirectory method, 110
getDocbaseNameFromId method, 133
getErrors method, 91, 120, 124
getExportOperation method, 108
getFilePath method, 110
getId method, 92
getImportOperation method, 105
getLocalClient method, 25, 152
getMoveOperation method, 114
getNewObjects method, 100, 105, 107
getNodes method, 91
getObjectID method, 92
getSelectedObject method, 176
getSession method, 25, 132, 134, 136, 152
getSessionManager method, 132, 152
getStatistics method, 134
getTransactionRollbackOnly method, 134
getValidationOperation method, 120
getVendorString method, 131
getVersion method, 131
getXMLTransformOperation method, 121 to 123
getYesNoAnswer method, 126
global registry, 16
H
HTML (Hypertext Markup Language), 122 to 123
I
IDfBusinessObject interface, 128
IDfCancelCheckoutNode interface, 101
IDfCancelCheckoutOperation interface, 101
IDfCheckinNode interface, 98
IDfCheckinOperation interface, 98
IDfCheckoutOperation interface, 94
IDfClient interface, 23, 25
IDfClientX interface, 25, 88
IDfDeleteNode interface, 116
IDfDeleteOperation interface, 116
IDfDocument interface, 25, 92
IDfExportNode interface, 110
IDfExportOperation interface, 108
IDfFile interface, 107
IDfFolder interface, 60, 92
IDfImportNode interface, 105, 107
IDfImportOperation interface, 105, 122
IDfList interface, 124
IDfLoginInfo interface, 152
IDfModule interface, 128
IDfMoveOperation interface, 114
IDfOperation interface, 89
IDfOperationError interface, 91, 124
IDfOperationMonitor interface, 125
IDfOperationNode interface, 90 to 91
IDfPersistentObject interface, 24 to 25
IDfProperties interface, 122
IDfService interface, 128, 131
IDfSession interface, 23, 25
IDfSessionManagerStatistics interface, 134
N
naming conventions, 23, 131, 136
.NET platform, 127
newObject method, 25
newService method, 131, 152
newSessionManager method, 25, 152
NEXT_MAJOR field, 98
nodes. See virtual documents, terminology; operations, nodes
null returns, 124
O
OLE (object linking and embedding)
  links, 105
OLE (Object Linking and Embedding)
  links, 20
R
Reader class, 123
release method, 25
releaseSession method, 132, 152
renditions, 123
reportError method, 126
repositories, 128
S
SBOs. See service based objects
schemas. See XML schemas
service based objects (SBOs), 130
  architecture, 131
  implementing, 131
  instantiating, 137, 151
  returning TBOs, 152