OpenText Documentum Server CE 22.2 - Administration and Configuration Guide
EDCCS220200-AGD-EN-01
OpenText™ Documentum™ Server
Administration and Configuration Guide
Rev.: 2022-June-08
This documentation has been created for OpenText™ Documentum™ Server CE 22.2.
It is also valid for subsequent software releases unless OpenText has made newer documentation available with the product,
on an OpenText website, or by any other means.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty, whether expressed or implied, for the
accuracy of this publication.
Table of Contents
1 Administration and configuration tools ................................ 17
1.1 Introduction ..................................................................................... 17
1.2 Documentum Administrator .............................................................. 18
1.3 Documentum Server Manager ......................................................... 18
1.3.1 Starting Documentum Server Manager ............................................. 19
1.4 Tool suite ....................................................................................... 19
1.1 Introduction
This guide is intended for system and repository administrators. The system
administrator installs and owns the Documentum installation. Repository
administrators own and are responsible for one or more repositories. Readers
should be familiar with the general principles of client/server architecture and
networking, and should know the Windows and Linux operating systems.
The two main tools for administering and configuring Documentum Server are
Documentum Administrator and Documentum Server Manager. Documentum
Administrator is sold separately. It is not included with Documentum Server.
Most administration and configuration tasks are done using Documentum
Administrator. Some tasks must be performed using Documentum Server
Manager or the command line.
This guide provides instructions for administration and configuration tasks using
Documentum Administrator, unless a task can only be accomplished using
Documentum Server Manager or the command line. In that case, the instructions are
provided using Documentum Server Manager or the command line, respectively.
OpenText Documentum Administrator User Guide contains information on using
Documentum Administrator.
The following table briefly describes the tools that are available:
Tool Description
Archive Automates archive and restore operations
between content areas.
Audit Management Deletes audit trail objects.
Consistency Checker Checks the repository for consistency across
a variety of object types.
Content Replication Automates replication of document content
for distributed file stores.
Content Storage Warning Monitors disk space on the devices used to
store content and index files. Queues a
warning message when a disk used for
content and index storage reaches a user-
defined percentage of capacity.
Data Dictionary Publisher Publishes data dictionary information to
make it visible to users and applications.
Database Space Warning Monitors the RDBMS. Queues a warning
message when the RDBMS reaches a user-
defined percentage of capacity.
Dm_LDAPSynchronization Determines all changes in the LDAP
directory server information and propagates
the changes to the repository.
Dmclean Automates the dmclean utility, which
removes orphaned content objects, notes,
ACLs, and SendToDistribution workflow
templates from a repository.
Dmfilescan Automates the dmfilescan utility, which
removes orphaned content files from a
specified storage area of a repository.
Dmextfilescan Removes orphaned content files from S3 and
S3-compatible stores. “dmextfilescan utility”
on page 544 provides detailed information
about the dmextfilescan utility.
Distributed Operations Performs the distributed operations
necessary to manage reference links. This is
an internal job that is installed as part of the
tool suite. OpenText Documentum Platform and
Platform Extensions Installation Guide contains
more details.
File Report Utility Provides a report to help restore deleted or
archived document content using file system
backups. This method only applies to
distributed storage areas.
Group Rename Renames a group in the repository.
Index Agent Startup Starts the index agent.
Log Purge Deletes log files for the server, session,
connection broker, and agent and the trace
log files resulting from method and job
executions.
Queue Management Deletes dequeued objects in the
dmi_queue_item table.
Remove Expired Retention Objects Removes objects with an expired retention
date, if they are stored in a Centera storage
area.
Rendition Management Removes unwanted document renditions.
State of the Repository Report Provides information about the repository
status.
Swap Info Reports on swap space availability and
usage.
ToolSetup Installs system administration tools.
Update Stats Updates stored statistics on the underlying
RDBMS tables, to aid in query performance.
User Chg Home Db Changes the user home repository.
User Rename Changes a user name in the repository.
Version Management Removes unwanted document versions.
When a client contacts the connection broker, the connection broker sends back the
IP address and port number of the server host. If there are multiple servers for the
repository, the connection broker returns connection information for all of them. The
client session then uses that information to choose a server and open the connection.
Clients can request a connection through a particular server, any server on a
particular host, or a particular server on a specified host.
The dfc.properties file contains most of the configuration parameters for handling
sessions. For example, the file contains keys that identify which connection brokers
are used to obtain a session, specify how many sessions are allowed for one user,
and enable persistent client caching. Many keys have default values, but an
administrator or application must explicitly set some of the keys. “dfc.properties
file” on page 137 contains more information on the dfc.properties file.
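For example, a minimal dfc.properties fragment might identify a connection broker and cap the number of sessions as follows (the host name and values are illustrative, not defaults):

```ini
# Connection broker used to obtain sessions (illustrative host name)
dfc.docbroker.host[0]=broker1.example.com
dfc.docbroker.port[0]=1489
# Maximum number of concurrent sessions this DFC instance allows
dfc.session.max_count=1000
```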
The standard connection broker that comes with a Documentum Server installation
can be customized to meet more specific requirements, such as:
These options are configured in the connection broker initialization file. Neither the
dfc.properties file nor the connection broker initialization file can be modified
using Documentum Administrator.
A connection broker deletes a Documentum Server from its list of known servers
when:
The initialization file is a plain text file and has the following format:
[SECURITY]
password = string_value
allow_hosts=host_name_list|deny_hosts=host_name_list
[DOCBROKER_CONFIGURATION]
host = host_name|IP_address_string
service = service_name
port = port_number
keystore_file=broker.p12
keystore_pwd_file=broker.pwd
cipherlist=AES128-SHA
secure_connect_mode=native or secure or dual
[TRANSLATION]port=["]outside_firewall_port=inside_firewall_port
{,outside_firewall_port=inside_firewall_port}["]
host=["]outside_firewall_ip=inside_firewall_ip
{,outside_firewall_ip=inside_firewall_ip}["]
Note: The password key in the [SECURITY] section is only valid on Linux
platforms. Connection brokers on Windows platforms do not use the security
password. The port translation strings are enclosed in double quotes when
multiple ports or hosts are specified.
If the initialization file includes a valid service name, the connection broker is started
with that service name. If the initialization file does not provide a service name but
does provide a valid port number, the connection broker is started using the port
number. If neither a service name nor a port number is included, the connection
broker is started on port 1489.
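As a sketch, an initialization file that pins the connection broker to a specific port (host name and port are illustrative) could contain:

```ini
[DOCBROKER_CONFIGURATION]
host = broker1.example.com
port = 1590
```

Because a port number is given and no service name is present, this broker starts on port 1590 rather than the default 1489.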
If the connection broker is running as a service, edit the service entry to include the
argument on the command line. You can use the Documentum Server Manager to
edit the service entry.
If there are other arguments on the command line in addition to the initialization file
argument, the -init_file argument must appear first or it is ignored.
To define accepted servers, use the following format in the initialization file:
[SECURITY]
...
allow_hosts=<host_name_list>
<host_name_list> is a list of the host machines on which the servers reside. Separate
multiple host names using commas. For example:
[SECURITY]
...
allow_hosts=bigdog,fatcat,mouse
The options are mutually exclusive. For each connection broker, you can configure
either the accepted servers or the rejected servers, but not both.
• Native: Connections are in the raw RPC format without any encryption
• Secure: Connections are encrypted using SSL
When the connection broker receives the request, it translates 4532 to 2231 and
172.18.23.257 to 2.18.13.211. It sends the values 2231 and 2.18.13.211 back to
application B, which uses them to establish the connection.
If you specify multiple ports or hosts for translation, enclose the translation string in
double quotes. For example:
[TRANSLATION]
port="2253=6452,2254=6754"
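A fuller sketch that translates both ports and host addresses (all ports and addresses are illustrative) might look like:

```ini
[TRANSLATION]
port="2253=6452,2254=6754"
host="10.8.13.2=192.168.1.5,10.8.13.3=192.168.1.6"
```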
3. Restart the connection broker, specifying the initialization file on the command
line.
Connection broker projection targets are defined in the server configuration object.
The Documentum Server installation procedure defines one connection broker
projection target in the server.ini file. The target is the connection broker that is
specified during the installation procedure. When the installation procedure is
complete, the target definition can be moved to the server configuration object.
The server proximity value is specified in the network location section of the server
configuration object, as described in “Creating or modifying network locations”
on page 46.
An individual server that has multiple connection broker projection targets can
project a different proximity value to each target. Guidelines for setting proximity
values are:
For example:
d:\documentum\product\<version>\bin\dmdocbroker.exe
-init_file DocBrok.ini -host engr -service engr_01
3. At the operating system prompt, type the command line for the
dm_launch_docbroker utility. The command-line syntax is:
dm_launch_docbroker [-init_file <file_name>] [-host <host_name>] [-service
<service_name>] [-port <port_number>]
Include the host name and a service name or port number to identify the
connection broker. Otherwise specify an initialization file that includes a
[DOCBROKER_CONFIGURATION] section to identify the connection broker.
By default, the first connection broker listed in the file handles the connection
request. If that connection broker does not respond within 15 seconds, the request is
forwarded to the next connection broker listed in the dfc.properties file. The
request is forwarded sequentially until a connection broker responds successfully or
until all connection brokers defined in the file have been tried. If there is no
successful response, the connection request fails.
or
[DOCBROKER_CONFIGURATION]
host=<host_machine_name>
port=<port_number>
The service name is the connection broker service name, defined in the host machine
services file. The port number is the port defined in the service. OpenText
Documentum Platform and Platform Extensions Installation Guide contains instructions
on setting up service names. If you include a service name, the connection broker is
started using that service. Otherwise, if you include a port number, the connection
broker is started using that port. If you do not include a service name or a port
number, the connection broker uses port number 1489.
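For reference, a services file entry that maps a broker service name to a port takes the usual name/port/protocol form (the service name shown here is hypothetical):

```text
# /etc/services
dm_docbroker_01    1489/tcp    # Documentum connection broker
```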
• On Windows:
d:\documentum\product\<version>\bin\dmdocbroker.exe
[-init_file <filename>] -host <host_name> -service <service_name>
• On Linux:
dm_launch_docbroker [-init_file <filename>] -host <host_name>
-service <service_name>
Include the host name and a service name or port number to identify the
connection broker. Otherwise, specify an initialization file that includes a
[DOCBROKER_CONFIGURATION] section to identify the connection broker.
2. Modify the projection properties in the server configuration object to add the
new connection broker as a server projection target.
3. Optionally, modify the server.ini file to add the new connection broker as a
server projection target.
If the connection broker was configured with shutdown security, supply the correct
password to shut down the connection broker.
The utility stops the connection broker that is running on the host specified in the
dmshutdown script. That connection broker was specified during the initial server
installation. If you are executing the dm_stop_docbroker utility in interactive mode
(without the -B flag), the utility displays which connection broker it intends to stop
and prompts for a confirmation. If you include the -B flag, the utility does not
prompt for confirmation or display which connection broker it is stopping. The
default for the -B flag is FALSE.
If you have multiple connection brokers on one machine, you can include the -N and
-S arguments to identify a particular connection broker to shut down.
The <password> is the password specified in the connection broker initialization file.
“Configuring shutdown security (Linux only)” on page 25 contains more
information on connection broker security. If the connection broker initialization file
contains a password, supply this password to stop the connection broker.
If you cannot use the dm_stop_docbroker utility, you can use the Linux kill
command to stop the connection broker if it was started without security. If you do
not know the process ID for the connection broker, you can obtain it using the Linux
ps command. You cannot use the Linux kill command to stop a connection broker
that was started with security. Only the Linux kill -9 command stops a secured
connection broker.
To stop multiple connection brokers, add one line to the script for each host on
which a connection broker resides. For example, suppose connection brokers are
running on hosts named lapdog, fatcat, and mouse. To stop all three connection
brokers, edit dm_stop_docbroker to include these three lines:
./dmshutdown docbroker -Tlapdog -P$password $@
#lapdog is the host name
If all connection brokers use the same password, the dm_stop_docbroker utility
substitutes the password specified on the command line for $password in the script.
If each connection broker requires a different password, add the password for each
connection broker in the script:
./dmshutdown docbroker -Tlapdog -Pbigbark $@
#lapdog is the host name
The default Documentum Server installation starts one Documentum Server for a
repository. However, a repository can have more than one Documentum Server. If a
repository is very active, serving many users, or its users are widely spread
geographically, multiple servers can provide load balancing and enhance
performance. “Adding or modifying Documentum Servers” on page 38 contains
more information on adding Documentum Servers to a repository.
Repositories consist of object type tables, type indexes, and content files. The
type tables and type indexes are tables in an underlying relational database. The
content files are typically stored in directories on disks in a given installation.
However, content files can also be stored in the database, in a retention storage
system such as Centera or NetApp SnapLock, or on external storage devices.
2. Navigate to Start > Programs > Documentum > Documentum Server Manager.
3. Click the DocBroker tab, select a connection broker, then click Start.
4. Click the Repository tab, select a repository, then click Start to start
Documentum Server.
6. Right-click Documentum Java Method Server in the Services list, then click
Start to start the Java Method Server.
• Collaboration Edition
• Federated Record Services
• Physical Records Management
• Records Manager
• Retention Policy Services
All server configuration objects for the current repository are listed on the
Documentum Server Configuration page in Documentum Administrator, as
described in “Server configuration object information” on page 37.
Server configuration objects are stored in the repository System cabinet. You can add
Documentum Servers by creating multiple server configuration objects, as long as
they are uniquely named. You can also modify a server configuration object and
save it as a new object.
Column Description
Name The name of the server configuration object.
Host The name of the host on which the
Documentum Server associated with the
server configuration object is running.
Operational Status The current running status and dormancy
status of the Documentum Server, separated
by a comma. Valid values are:
Running Status:
• Running
• Unknown
Dormancy Status:
• Dormancy Requested
• Projected Dormancy
• Dormant
• Active
• Invalid
Tab Description
Info Select the Info tab to view or modify
information on the server host, the platform
on which the server is running, code pages
and locales, and other general information.
Connection Brokers Select the Connection Brokers tab to view or
modify connection broker projections.
Network Locations Select the Network Locations tab to view or
modify the proximity values for the
associated network locations.
App Servers Select the App Servers tab to add an
application server for Java method execution.
Cached Types Select the Cached Types tab to specify which
user-defined types are to be cached at server
startup.
Locations Select the Locations tab to view the locations
of certain files, objects, and programs that
exist on the server host file system, including
the assume user program, change password
program, log file.
Far Stores Select the Far Stores tab to view accessible
storage areas and to designate far stores. A
server cannot store content in a far store.
OpenText Documentum Platform and Platform
Extensions Installation Guide contains more
information on far stores.
Field Value
Name The name of the initial server configuration
object created. By default, the server
configuration object has the same name as
the repository. When you create a new server
configuration object, you assign it a new
name.
Host Name The name of the host on which the server is
installed. Read-only.
Server Version The version, operating system, and database
of the server defined by the server
configuration object. Read-only.
Process ID The process ID of the server on its host.
Read-only.
Install Owner The Documentum installation owner. Read-
only.
Install Domain On Windows, the domain in which the
server is installed and running. Read-only.
Trusted Mode Indicates whether Trusted Content Services
is enabled. Read-only.
Dormancy Status Indicates the dormancy status of
Documentum Server.
Operator Name The name for the repository operator if the
repository operator is not explicitly named
on the dmarchive.bat command line or in the
Archive or Request method. This must be
manually configured. The default is the
owner of the server configuration object (the
repository owner).
Default Client Codepage The default codepage for clients. The value is
determined programmatically and is set
during server installation. In general, it does
not need to be changed.
Options are:
• UTF-8
• ISO_8859-1
• Shift_JIS
• EUC-JP
• EUC-KR
• US-ASCII
• ISO_10646-UCS-2
• IBM850
• ISO-8859-8
Turbo Backing Store The name of the file store storage area where
the server puts renditions generated by
indexing blob and turbo content. The default
is filestore_01.
Rendition Backing Store The name of the file store storage area where
the server will store renditions generated by
full-text indexing operations.
Modifications Comments Remarks on changes made to the server
configuration object in this version.
SMTP Server The name of the computer hosting the SMTP
Server that provides mail services to
Documentum Server.
System Shutdown Timeout The time in seconds that the workflow agent
attempts to shut down work items gracefully
after receiving a shutdown command. The
default value is 120 seconds.
Default Alias Set The default alias set for new objects. Click
the Select Alias Set link to access the Choose
an alias set page.
Enabled LDAP Servers The LDAP configuration objects for LDAP
servers used for user authentication and
synchronization.
When a client requests a connection to a repository, the connection broker sends the
client the connection information for each server associated with the repository. The
client can then choose which server to use.
Assume User The location of the directory containing the
assume user program. The default is
assume_user.
Change Password The location of the directory containing the
change password program. The default is
change_password.
Common The location of the common directory. The
default is common.
Events The location of the events directory. The
default is events.
Log The location of the logs directory. The
default is temp.
Nls The location of the NLS directory. The
default is a single blank.
Secure Writer The location of the directory containing the
secure writer program. The default is
secure_common_area_writer.
System Converter The location of the directory containing the
convert.tbl file and the system-supplied
transformation scripts. There is no default for
this field.
Temp The location of the temp directory.
User Converter The full path for the user-defined
transformation scripts. The default is
convert.
User Validation The full path to the user validation program.
The default is validate_user.
Verity In repositories of version 5.3 and later,
contains a dummy value for compatibility
with Webtop 5.2.x. The default value is
verity_location.
Signature Check The location of the directory that contains the
signature validation program. The default is
validate_signature.
Authentication Plugin The location of an authentication plug-in, if
used. The default is auth_plugin in
$Documentum/dba/auth.
A Documentum Server can be moved to a dormant state only from an active state.
To perform this operation, you must be a member of dm_datacenter_managers, a
privileged group whose membership is maintained by superusers. This feature
applies only to Documentum Server versions 7.0 and later.
A Documentum Server can be moved back to an active state only from a dormant
state. To perform this operation, you must be a member of
dm_datacenter_managers, a privileged group whose membership is maintained by
superusers. This feature applies only to Documentum Server versions 7.0 and later.
Caution
To perform view, create, update, or delete operations on a Documentum
Server that is in a dormant state, you must, as a member of the
dm_datacenter_managers group, first execute the Enable Save Operation
action; otherwise, these operations fail.
Connection broker logs record information about connection brokers, which provide
connection information to clients.
Note: If you are connected to a secondary server, you see only the server and
connection broker logs for that server.
1. Using any client, connect to the repository whose server you are disabling as a
process engine.
4. Delete the object corresponding to the name of the server that is configured as a
process engine.
The properties in the server configuration object provide operating parameters and a
map to the files and directories that the server can access. At startup, a Documentum
Server always reads the CURRENT version of the server configuration object.
The server.ini file contains information you provide during the installation process,
including the repository name and the repository ID. That information allows the
server to access the repository and contact the RDBMS server. The server.ini file also
contains the name of the server configuration object.
You can use Documentum Administrator to modify the server configuration object,
as described in “Adding or modifying Documentum Servers” on page 38. However,
the server configuration object pages in Documentum Administrator do not contain
all of the default and optional properties that are defined in the server.ini file. Some
of those properties can be modified only in the server.ini file, using Documentum
Server Manager. Typically, only the installation owner can modify the server.ini file.
[DOCBROKER_PROJECTION_TARGET]
<key>=<value>
Only the [SERVER_STARTUP] section is required. The other sections are optional.
Changes to the server.ini file take effect only after the server is stopped and
restarted.
Note: To receive a verbose description of the server.ini file, type the following
command at the operating system prompt:
documentum -h
If you want to add a comment to the file, use a semicolon (;) as the comment
character.
To change the server.ini file, you must have appropriate access privileges for the
%DOCUMENTUM%\dba\config\<repository> ($DOCUMENTUM/dba/config/
<repository>) directory in which the file resides. Typically, only the installation owner
can access this directory.
4. Stop and restart the server to make the changes take effect.
“Keys in the SERVER_STARTUP section” on page 53, describes the keys in the
server startup section in alphabetical order.
Note: This is supported only in administration mode.
The method_server_threads values are for the dmbasic method server only and do
not have any impact on the Java method server.
#tickets = concurrent_sessions * ticket_multiplier
3.5.2.1 concurrent_sessions
The concurrent_sessions key controls the number of connections the server can
handle concurrently. This number must take into account not only the number of
users who are using the repository concurrently, but also the operations those users
are executing. Some operations require a separate connection to complete the
operation. For example:
The default value is 100. Consider the number of active concurrent users you expect,
the operations they are performing, and the number of methods or jobs you execute
regularly, and then modify this value accordingly.
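For example, to raise the limit for a busier deployment, set the key in the [SERVER_STARTUP] section of server.ini (the value 200 is illustrative) and restart the server:

```ini
[SERVER_STARTUP]
concurrent_sessions = 200
```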
3.5.2.2 database_refresh_interval
The database_refresh_interval key defines how often the main server thread (parent
server) reads the repository to refresh its global caches. You can raise this value, but
you cannot lower it.
3.5.2.3 umask
On Linux platforms, permissions can be expressed as a 3-digit octal number,
representing the permission level for the owner, group owner, and others. For
example, a file created with permissions 700 grants the owner read, write and
execute permission, but the group owner and others get no permissions at all.
Permissions of 644 grant the owner read and write permission, but the group
owner and others have only read permission. Similarly, 640 gives the owner read
and write permission, gives the group owner read-only permission, and gives
others no permissions.
The umask is also expressed as an octal number and is used to further restrict (or
mask) the permissions when a directory or file is created. For example, if the
requested permissions are 766 and the umask is 022, the actual permissions applied are
744. A bit set in the umask turns off the corresponding bit in the requested
permissions.
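The masking arithmetic can be checked with ordinary shell arithmetic; this sketch simply ANDs the requested mode with the complement of the umask:

```shell
# Requested permissions 766, umask 022: each bit set in the umask
# clears the corresponding bit in the requested permissions.
requested=0766
mask=0022
actual=$(( requested & ~mask & 0777 ))
printf 'actual permissions: %o\n' "$actual"   # prints: actual permissions: 744
```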
Documentum Server uses a umask key in the server.ini file that is separate from the
Linux per-process umask, but applies it in a similar fashion. The Documentum
Server internally refers to permissions with the symbolic names dmOSFSAP_Public,
dmOSFSAP_Private, dmOSFSAP_PublicReadOnly, dmOSFSAP_PrivateReadOnly,
and dmOSFSAP_PublicOpen. “Documentum Server directory and file permissions
on Linux platforms” on page 74 describes the associated permission values.
Note: The umask in the server.ini configuration file only modifies the values
for the dmOSFSAP_PublicOpen permissions. If the umask is not specified in
the server.ini file, the default setting for the dmOSFSAP_PublicOpen
permissions is 777 for directories and 666 for files. Any directories or files that
are publicly accessible outside the Documentum Server are created using this
permission.
Connection broker projection targets are also defined in a set of properties in the
server configuration object. We recommend to use the server configuration
properties to define additional projection targets, instead of the server.ini file.
Modifying the server configuration object allows changing a target without
restarting Documentum Server. If the same projection target is defined in both the
server configuration properties and in the server.ini file, Documentum Server uses
the values for the target in the server configuration properties.
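For reference, a projection target defined in server.ini uses a section of the form shown earlier in this chapter; the host name and port here are illustrative:

```ini
[DOCBROKER_PROJECTION_TARGET]
host = broker1.example.com
port = 1489
```

Defining the same target in the server configuration object instead allows it to be changed without restarting Documentum Server.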
“server.ini FUNCTION_EXTENT_SIZE keys” on page 76, lists the keys for the
FUNCTION_EXTENT_SIZE section.
“server.ini TYPE_EXTENT_SIZE keys” on page 77, lists the keys for the
TYPE_EXTENT_SIZE section.
• On Linux hosts:
– The installation account of the primary host must have sufficient privilege to
execute remsh commands.
– The primary host must be able to resolve target machine names through
the /etc/hosts file or DNS.
– Each target machine must be able to resolve the primary host name through
the /etc/hosts file or DNS.
• Programmatically and gracefully start or restart the Documentum Server and all
of its components in the correct sequence
• Programmatically and gracefully shut down the Documentum Server and all of
its components in the correct sequence
• Programmatically and gracefully fail over the Documentum Server and all of its
components in the correct sequence
• Monitor the performance of the Documentum Server and all of its components
The first three processes require that Documentum Server and its components can
be placed into a dormant state. A dormant state ensures that in-progress
Documentum Server transactions are completed but that no new transactions are
started. The dormant state enables Documentum Server to be gracefully started,
restarted, shut down, or failed over without transactions being lost or the state of
Documentum Server becoming inconsistent.
Figure 3-1 illustrates the interactions between the major Documentum platform
components.
The Java Method Server (JMS) is never placed into a dormant state, because in a JMS
high-availability configuration multiple Documentum Server instances could be
using the same JMS.
You can retrieve the following performance metrics for Documentum Server and
JMS servers:
• RPCs
Notes
• After getting a session, you can call all of the performance monitoring
methods, which are implemented in IDfSession.
• Only privileged users can call these methods.
• The DFC Javadoc contains more information and examples on implementing
these methods.
After getting a session, you can call all of the operational status methods, which are
implemented in IDfSession. Before changing the operational status of a repository,
Documentum Server, or ACS server, check their current operational status.
Figure 3-2 shows the Documentum Server state changes as a result of executing the
different DFC methods.
3.7.5.1 Overview
In a virtual deployment, you need to be able to restart applications if a Documentum
Server is being replaced. When you restart applications in a Documentum Server
deployment, restart the following components in the following order:
1. Connection brokers
2. Documentum Servers (repository services)
3. Java Method Servers
4. Index agents
• All transactions that were running at the time the component was stopped are
restarted.
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
where <service> is the name of the Documentum Server and repository Windows
service to start.
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
where <service> is the name of the JMS service to start. The default name is
Documentum Java Method Server.
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
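For example, a Windows batch fragment that restarts the JMS service and checks %ERRORLEVEL% might look like the following sketch. The service name shown is the documented default; adjust it to match your installation.

```
rem Restart the Java Method Server service and verify the exit status.
net stop "Documentum Java Method Server"
net start "Documentum Java Method Server"
if %ERRORLEVEL% NEQ 0 (
    echo JMS restart failed
) else (
    echo JMS restart succeeded
)
```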
where:
• <repository_name> is the name of the repository that the index agent runs
against.
• <user_name> is the name of a user that has administrator permissions.
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
Include the host name (<host_name>) and a service name (<service_name>) or port
number (<port_number>) to identify the connection broker. Otherwise, specify an
initialization file (<file_name>) that includes a [DOCBROKER_CONFIGURATION]
section to identify the connection broker. An exit status of zero (in the $? variable)
indicates that the restart was successful; a nonzero result indicates that the restart
was unsuccessful.
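The same exit-status check can be scripted on Linux. In this sketch, `true` stands in for the actual connection broker restart command (typically a dm_launch_Docbroker script in a Documentum installation; verify the script name in your dba directory) so that only the generic pattern is demonstrated:

```shell
# Placeholder for the connection broker restart command
# (e.g. dm_launch_Docbroker); 'true' is used here so the
# exit-status pattern itself can be demonstrated.
true
if [ $? -eq 0 ]; then
    echo "restart successful"
else
    echo "restart failed" >&2
fi
```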
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
where:
• <repository_name> is the name of the repository that the index agent runs
against.
• <user_name> is the name of a user that has administrator permissions.
An exit status of zero (in %ERRORLEVEL%) indicates that the restart was
successful; a nonzero result indicates that the restart was unsuccessful.
The Documentum Servers used for load balancing must project identical proximity
values to the connection broker. If the servers have identical proximity values,
clients pick one of the servers randomly. If the proximity values are different, clients
always choose the server with the lowest proximity value.
If a Documentum Server stops and additional servers are running against the
repository with proximity values less than 9000, the client library gracefully
reconnects any disconnected sessions unless one of the following exceptions occurs:
You must have system administrator or superuser user privileges to stop a server.
1. Navigate to Start > Control Panel > Administrative Tools > Services. Select the
Java Method Server, then click Stop.
2. Navigate to Start > Programs > Documentum > Documentum Server Manager,
click the Repository tab, select the repository, then click Stop.
3. Select the DocBroker tab, select the connection broker, then click Stop.
Caution
On Windows platforms, using the IDfSession.shutdown method to shut down
the server is not recommended, because the method does not necessarily shut
down all relevant processes.
You must run the script on the machine where the server resides. To invoke the
script, issue the command:
dm_shutdown_<repository> [-k]
where <repository> is the name of the repository. The -k flag instructs the script to
issue an operating-system kill command to stop the server if the shutdown request
has not been completed in 90 seconds.
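For example, to shut down a repository named devrepository (a hypothetical name) and force an operating-system kill if the shutdown has not completed within 90 seconds:

```
./dm_shutdown_devrepository -k
```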
Note: You must pass the NO_IMMEDIATE parameter if you want the scripts to
wait for open transactions to be closed before shutting down Documentum
Server on Linux. The syntax format is:
shutdown,session[,immediate][,delete_entry]
2. In the new copy, edit the line immediately preceding shutdown,c,T by replacing
<repository name> with <repository_name>.<server_config_name>.
The line you want to edit is as shown:
./iapi <repository name> -U$DM_DMADMIN_USER -P -e << EOF
Use the name of the server configuration object that you created for the server.
For example:
./iapi devrepository2.server2 -U$DM_DMADMIN_USER -P -e << EOF
• On Windows, edit the Documentum Server service to add -oclean to the end
of the command line.
• On Linux, add the -oclean argument in the dm_start_<repositoryname>
command line.
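For example, on Linux, the start command for a hypothetical repository named devrepository with the clean option added would look like:

```
./dm_start_devrepository -oclean
```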
On Windows:
1. Navigate to %DM_JMS_HOME%\bin.
2. Execute startMethodServer.cmd or start the Windows service for the server
instance.
For the Java method server, and the server instance hosting the Java method
server, the service name is Documentum Java Method Server. Starting that
service is equivalent to executing startMethodServer.cmd.
On Linux:
2. Navigate to $DM_JMS_HOME/bin.
3. Execute startMethodServer.sh.
On Windows:
1. Navigate to %DM_JMS_HOME%\bin.
2. Execute stopMethodServer.cmd or stop the Windows service for the server
instance.
For the Java method server, and the server instance hosting the Java method
server, the service name is Documentum Java Method Server. Stopping that
service is equivalent to executing stopMethodServer.cmd.
On Linux:
2. Navigate to $DM_JMS_HOME/bin.
3. Execute stopMethodServer.sh.
Both the name and the location of the file can be modified using the -logfile
parameter on the server start-up command line. The server appends to the log file
while the server is running. If the server is stopped and restarted, the file is
saved and a new log file is started. The saved log files have the following format:
The utility output displays the cause of the error and possible solutions, as
described in the following example:
[DM_SERVER_E_NO_MTHDSVR_BINARY]
"%s: Method server binary is not accessible."
CAUSE: The method server binary "mthdsvr" is not under
$DM_HOME/bin or it does not have the proper permission.
ACTION: Make sure method server binary "mthdsvr" is under
$DM_HOME/bin and set execution permission for it.
4.1 Repositories
Repositories are composed of object type tables, type indexes, and content files. The
type tables and type indexes are tables in an underlying relational database. The
content files are typically stored in directories on local disks. Content files can also
be stored in the database, in a retention storage system such as Centera or NetApp
SnapLock, or on external storage devices.
Information about the repository is located in a server startup file. The startup
file is executed whenever the ACS server is started. The information in the startup
file binds the Documentum Server to the repository.
Only users with superuser privileges can view or modify the repository
configuration object of a repository. It is not possible to use Documentum
Administrator to create additional repositories or delete an existing repository.
Adding another repository requires running the Documentum Server installer.
Field Value
Database The name of the RDBMS vendor. Read-only.
Repository ID The repository ID number assigned during installation.
Read-only.
Federation Specifies if the repository is a member of a federation.
Read-only.
Security The security mode of the repository. ACL and None are
valid values. None means repository security is disabled.
ACL means access control lists (permission sets) are
enabled. Read-only.
Index Store The name of the tablespace or other storage area where
type indexes for the repository are stored.
Field Value
Partitioned Indicates whether the repository is partitioned. When a
repository is created or updated with partitioning enabled,
the Documentum Server sets the flag to True.
Field Value
Oldest Client Version Specifies which DFC version is used for chunked XML
documents. The value must be changed manually. If the
field is left blank, the DFC format is compatible with client
versions earlier than the current DFC version. If a particular
DFC version is specified, clients running an earlier DFC
version cannot access the chunked XML documents. Set the
property value to the client version number in the format
XX.YY, where XX is the major version and YY is the minor
version.
Modifications Comment This field is optional. It can be used to add a comment
about any modifications made to the repository
configuration.
Default Application Permit The default user permission level for application-controlled
objects accessed through an application that does not own
the object. Valid values are:
• 2: Browse
• 3: Read
• 4: Relate
• 5: Version
• 6: Write
• 7: Delete
Field Value
Minimum Owner Extended Specifies the minimum owner extended permissions for an
Permission object owner after all ACL rules have been applied. One or
more extended permissions can be selected. By default, no
extended permission is selected. Valid values are:
• Execute Procedure
Users with superuser privileges can change the owner
of an item and run external procedures on certain types.
• Change Location
Allows the object owner to move the object in the
repository.
• Change State
Allows the object owner to change the state of a lifecycle
attached to the object.
• Change Permission
Allows the object owner to modify the basic permissions
associated with the object.
• Change Ownership
Allows the object owner to change the object owner.
• Extended Delete
Allows the object owner to delete the object.
• Change Folder Links
Allows the object owner to link or unlink the object to a
folder.
Authorization Settings
MAC Access Protocol The file-sharing software type in use for Macintosh clients.
Valid values are:
• None
• NT
• Ushare
• Double
Field Value
Authentication Protocol Defines the authentication protocol used by the repository.
• On Windows, if set to Domain Required, it indicates
that the repository is running in domain-required mode.
If a repository is running in domain-required mode, the
login domain value in the login ticket generated for a
user attempting to log in using an inline password
defaults to the domain of the superuser even if the user
on the login ticket belongs to a different domain. Users
from a different domain must specify their login
domain name when logging into the repository.
If a repository is running in no-domain required mode,
users can log in with only their user name and inline
password. It is also possible to prepend the user name
with a backslash (\). A backslash specifies an empty
domain.
• On Linux platforms, choose between Linux
authentication or Windows domain authentication.
Cached Query Effective Date Used to manage the client query caches. The default is
NULLDATE.
Run Actions As The user account that is used to run business policy
(document lifecycle) actions. Options are:
• Session User (default)
• Superuser
• Lifecycle Owner
• Named User
If selected, click the Select User link to access the
Choose a user page to select a user.
Login Tickets Specifies that all other repositories are trusted and login
tickets from all repositories are accepted by the current
repository.
If selected, Allow login tickets from repositories is not
displayed.
Allow login tickets from This field is only displayed if Allow all login tickets is not
repositories selected.
Field Value
Designate Login Ticket Specifies the earliest possible creation date for valid login
Expiration tickets. Tickets issued before the designated date are not
valid in the current repository. Select one of the following:
• Null date: Specifies that there is no cutoff date for login
tickets in the current repository. This is the default
value.
• Expire login tickets on the following date and time:
Specifies the earliest possible creation date for valid
login tickets. Use the calendar control to choose the
correct date and time.
Login tickets created before that time and date cannot be
used to establish a connection. Currently connected
users are not affected by setting the time and date.
Maximum Authentication The number of times user authentication can fail for a user
Attempts before that user is disabled in the repository.
Authentication attempts are counted for logins and API
methods requiring the server to validate user credentials,
including the Changepassword, Authenticate, Signoff,
Assume, and Connect methods.
Field Value
Synchronization Settings For repositories where Offline Client is
enabled. Options are:
• None: no content synchronization to local
machine.
No content is synchronized to the local
machine. This is the default setting for a
repository.
• Basic: 1-way download to local machine
as read only.
Content is downloaded to the local
machine and marked read-only. No
content is uploaded from the local
machine.
• Role-Based: assign sync permissions to
specific repository roles.
Synchronization permissions are based
on specific user roles. Synchronization is
enabled and users can download content.
Whether content can be uploaded
depends on the user's role. If
selected, you must designate the
synchronization roles and check-in
settings.
Synchronization Role If you selected role-based synchronization
where Offline Client is enabled, you must
designate the roles. Click the Select users/
groups for offline upload link to access the
Choose a user/group page to select the users
whom you want to allow to use offline upload.
Check In Settings Select whether to use the client dialog or the
user's local settings to check content into the
repository. Options are:
• Always Use client dialog to check in
content
• Use User’s local check in setting
A repository can be moved to a dormant state only from an active state. To perform
this operation, you must be a member of dm_datacenter_managers, a
privileged group whose membership is maintained by superusers. This feature is
only applicable to repositories of version 7.0 and later.
Notes
A repository can be moved back to an active state only from a dormant state. To
perform this operation, you must be a member of dm_datacenter_managers, a
privileged group whose membership is maintained by superusers. This feature is
only applicable to repositories of version 7.0 and later.
Note: When a repository is moved to an active state, the status of all the
Documentum Servers and ACS servers for this repository will also be moved
to an active state.
password_of_user
3. Open the dfc.properties file in a text editor and modify the following attributes:
dfc.globalregistry.repository=global_registry_repository_name
dfc.globalregistry.username=user_login_name
dfc.globalregistry.password=encrypted_password_of_user
“Default users created during repository configuration” on page 108 lists the default
users that are created during repository configuration.
Group Members
admingroup installation_owner, repository_owner
docu repository_owner, installation_owner,
dm_autorender_win32, dm_autorender_mac,
dm_mediaserver
queue_admin None.
• MAKE_INDEX
“MAKE_INDEX” on page 362 contains more information.
• MOVE_INDEX
By default, type tables and indexes are stored in the same tablespace or segment.
However, you can create a repository with separate tablespaces or segments for
each or you can move the indexes later, using the MOVE_INDEX method.
Indexes that you create can be placed in any directory. “MOVE_INDEX”
on page 362 contains more information.
OpenText Documentum Platform and Platform Extensions Installation Guide contains
more information on creating a repository with separate tablespaces.
• DROP_INDEX
Removes a user-defined index. It is strongly recommended to not remove any of
the system-defined indexes. “DROP_INDEX” on page 358 contains more
information.
The r_normal_tz property in the docbase config object controls how Documentum
Server stores dates in the repository. If the property value is 0, all dates are stored in
UTC time. Otherwise, dates are normalized using the local time of Documentum
Server before being stored in the repository. The offset value is the time zone
offset from UTC, expressed in seconds; it is set by Documentum Server during the
upgrade of existing repositories (repositories upgraded from 5.x to later versions).
For example, if the offset represents the Pacific Standard Time zone, the offset value
is -8*60*60, or -28800 seconds. When the property is set to an offset value,
Documentum Server stores all date values based on the time identified by the time
zone offset.
Dump files are created by using the session code page. For example, if the session in
which the dump file was created was using UTF-8, the dump file is a UTF-8 dump
file. The repository into which the dump file is loaded must use the same code page
as the source repository.
Dump and load operations can be performed manually using either IAPI, Docbasic
scripts, or the IDfDumpRecord and IDfLoadRecord DFC interfaces.
Note: Dump and load operations require additional steps for repositories
where Web Publisher is installed.
A load object record object contains information about one specific object that is
loaded from the dump file into a repository. Load object record objects are used
internally.
Note: This information does not apply to dump and load operations that are
used to execute object replication jobs.
However, when you dump an object, the server includes any objects referenced by
the dumped object. This process is recursive, so the resulting dump file can contain
many more objects than the object specified in the type, predicate, and predicate2
repeating properties of the dump record object.
When dumping a type that has a null supertype, the server also dumps all the
objects whose r_object_ids are listed in the ID field of the type.
The type property is a repeating property. The object type specified at each index
position is associated with the WHERE clause qualification defined in the predicate
at the corresponding position.
The dump operation dumps objects of the specified type and any of its subtypes that
meet the qualification specified in the predicate. Consequently, it is not necessary to
specify each type by name in the type property. For example, if you specify the
SysObject type, then Documentum Server dumps objects of any SysObject or
SysObject subtype that meets the qualification.
Use the following guidelines when specifying object types and predicates:
• The object type must be identified by using its internal name, such as
dm_document or dmr_containment.
Object type definitions are only dumped if objects of that type are dumped or if
objects that are a subtype of the type are dumped.
This means that if a subtype of a specified type has no objects in the repository or
if no objects of the subtype are dumped, the dump process does not dump the
definition of subtype. For example, suppose you have a subtype of documents
called proposal, but there are no objects of that type in the repository yet. If you
dump the repository and specify dm_document as a type to dump, the type
definition of the proposal subtype is not dumped.
This behavior is important to remember if you have user-defined subtypes in the
repository and want to ensure that their definitions are loaded into the target
repository.
• To dump subtype definitions for types that have no object instances in the
repository or whose objects are not dumped, you must explicitly specify the
subtype in the dump script.
• If you have created user-defined types that have no supertype, be sure to
explicitly include them in the dump script if you want to dump objects of those
types. For example, the following commands will include all instances of
your_type_name:
append,c,l,type
your_type_name
append,c,l,predicate
1=1
• If you have system or private ACLs that are not currently associated with an
object, they are not dumped unless you specify dm_acl as a type in the dump
script. For example, include the following lines in a dump script to dump all
ACLs in the repository (including orphan ACLs):
append,c,l,type
dm_acl
append,c,l,predicate
1=1
• By default, storage area definitions are only included if content associated with
the storage is dumped. If you want to dump the definitions of all storage areas,
even though you may not dump content from some, include the storage type (file
store, linked, and distributed) explicitly in the dump script.
• When you dump the dm_registered object type, Documentum Server dumps
only the object (dm_registered) that corresponds to the registered table. The
underlying RDBMS table is not dumped. Use the dump facilities of the
underlying RDBMS to dump the underlying table.
You must supply a predicate for each object type you define in the type property. If
you fail to supply a predicate for a specified type, then no objects of that type are
dumped.
To dump all instances of the type, specify a predicate that is true for all instances of
the type, such as 1=1.
To dump a subset of the instances of the object type, define a WHERE clause
qualification in the predicate properties. The qualification is imposed on the object
type specified at the corresponding index level in the type property. That is, the
qualification defined in predicate[0] is imposed on the type defined in type[0], the
qualification defined in predicate[1] is imposed on the type defined in type[1], and
so forth.
For example, if the value of type[1] is dm_document and the value of predicate[1] is
object_name = ‘foo’, then only documents or document subtypes that have an object
name of foo are dumped. The qualification can be any valid WHERE clause
qualification. The OpenText Documentum Server DQL Reference Guide contains the
description of a valid WHERE clause qualification.
Important: If you use the predicate2 property at any index position, you must also set
the predicate2 property at all index positions before the desired position.
Documentum Server does not allow you to skip index positions when setting
repeating properties. For example, if you set predicate2[2] and predicate2[4], you
must also set predicate2[0], predicate2[1], and predicate2[3]. It is valid to set the
values for these intervening index positions to a single blank.
By default, if the content is stored in a file store, Centera storage area, or NetApp
SnapLock storage area, the content is not included in the dump file. You can set the
include_content parameter to include such content. If you are dumping a repository
that has encrypted file store storage areas, you must include the content in the dump
file. Documentum Server decrypts the content before placing it into the dump file.
“Dumping without content” on page 115 describes the default behavior and
requirements for handling dump files without content. “Including content”
on page 116 describes how to include content and the requirements for dump files
with content.
If the content is stored in a blob or turbo storage area, the content is automatically
included in the dump file because the content is stored in the repository.
When the dump file is loaded into the target repository, any file systems referenced
for content must be visible to the server at the target site. For content in retention
storage, the ca_store object at the target site must have an identical definition as the
ca_store object at the source repository and must point to the same storage system
used by the source repository.
In the target repository, the storage objects for the newly loaded content must have
the same name as the storage objects in the source repository but the filepaths for the
storage locations must be different.
The owner of the target repository must have Read permission in the content storage
areas of the dumped repository when the load operation is executed. The load
operation uses the target repository owner account to read the files in the source
repository and copy them into the target repository.
In the target repository, the storage objects for the newly loaded content must have
the same names as those in the source repository, but the actual directory location,
or IP address for a retention store, can be different or the same.
Always include content if you are dumping a repository to make a backup copy, to
archive a repository, or to move the content or if the repository includes an
encrypted storage area.
When you include content, you can create a compressed dump file to save space. To
compress the content in the dump file, set the dump_parameter property to
compress_content = T.
You can improve the performance of a large dump operation by setting a larger
cache size. If you do not specify a cache size, the server uses a default size of 1 MB,
which can hold up to 43,690 object IDs.
To increase the cache size, set the cache_size argument of the dump_parameter
property to a value between 1 and 100. The value is interpreted as megabytes and
defines the maximum cache size. The memory used for the cache is allocated
dynamically as the number of dumped objects increases.
Documentum Server ignores the cache setting when doing a full repository dump.
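Both settings are specified through the dump_parameter repeating property. In an IAPI dump script, the appends might look like the following sketch (the cache_size value of 10 is illustrative, and the exact value syntax should be verified against your environment):

```
append,c,l,dump_parameter
compress_content=T
append,c,l,dump_parameter
cache_size=10
```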
To use a script:
1. Write a script that creates the dump object, sets its properties, saves the object,
and checks for errors.
If you do not set the file_name property to a full path, Documentum Server
assumes the path is relative to the root directory of the server. The filename
must be unique within its directory. This means that after a successful load
operation that uses the dump file, you must move the dump file to archival
storage or destroy it so you can successfully execute the script later.
2. Use IAPI to execute the script. Use the following command-line syntax:
iapi source_db -Uusername -Ppassword < script_filename
where:
• source_db is the name of the repository being dumped.
• username is the user name of the user who is executing the operation.
• password is the password of that user.
3. If the dump was successful, destroy the dump object. If the Save on the dump
operation did not return OK, the dump was not successful.
Destruction of the dump object cleans up the repository and removes the dump
object records and state information that are no longer needed.
The script assumes that you want to dump all instances of the types, not just a
subset. Consequently, the predicates are set as 1=1 (you cannot leave them blank). If
you want to dump only some subset of objects or want to include all ACLs, type
definitions, or storage area definitions, modify the script accordingly.
save,c,l
getmessage,c
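A minimal IAPI dump script following this pattern might look like the following sketch. The file path is illustrative, and the dm_dump_record type name is an assumption based on the dump record object described in this section; the script dumps all documents and all ACLs (including orphan ACLs):

```
create,c,dm_dump_record
set,c,l,file_name
/tmp/repository.dump
append,c,l,type
dm_document
append,c,l,predicate
1=1
append,c,l,type
dm_acl
append,c,l,predicate
1=1
save,c,l
getmessage,c
```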
Notes
• Destroy the dump file (target file named in the script) if it exists and then re-
execute the script.
If the specified file already exists when you try to save a new dump record
object, the save operation fails. Re-executing the script creates a new dump
record object.
• If the dump operation is restartable, fetch the existing dump object from the
source repository and save it again. Saving the object starts the dump operation.
Documentum Server resumes where it left off when the crash occurred.
If your operating system is configured to allow files larger than 2 GB, the dump file
can exceed 2 GB in size. If you create a dump file larger than 2 GB, you cannot load
it on a machine that does not support large file sizes or large file systems.
If the dump file does not include the actual content files associated with the objects
you are loading, the operation reads the content from the storage areas of the
dumped repository. This means that the owner of the repository that you are
loading must have Read privileges at the operating system level for the storage areas
in the source repository.
The load operation generates a dmi_queue_item for the dm_save event for each
object of type SysObject or a subtype that is loaded into the target repository. The
event is queued to the dm_fulltext_index_user user account. This ensures that the
objects are added to the index of the target repository. You can turn off this behavior.
“Turning off save event generation during load operations” on page 121 contains the
instructions.
Generally, when you load objects into a repository, the operation does not overwrite
any existing objects in the repository. However, in two situations overwriting an
existing object is the desired behavior:
In both situations, the content object that you are loading into the repository could
already exist. To accommodate these instances, the load record object has a relocate
property. The relocate property is a Boolean property that controls whether the load
operation assigns new object IDs to the objects it is loading.
The type and predicate properties are for internal use and cannot be used to load
documents of a certain type.
If you dump and load job objects, the load operation automatically sets the job to
inactive in the new repository. This ensures that the job is not unintentionally started
before the load process is finished and it allows you the opportunity to modify the
job object if needed. For example, to adjust the scheduling to coordinate with other
jobs in the new repository.
The load operation sets jobs to inactive (is_inactive=TRUE) when it loads the jobs,
and sets the run_now property of jobs to FALSE.
If the load operation finds an existing job in the target repository that has the same
name as a job it is trying to load, it does not load the job from the dump file.
When you load a registered table, the table permits defined for that table are carried
over to the target repository.
During a load operation, every object of type SysObject or SysObject subtype loaded
into the target repository generates a save event. The event is queued to the
dm_fulltext_index_user. This behavior ensures that the object is added to the
index of the target repository.
New repositories are not empty. They contain various cabinets and folders created
by the installation process, such as:
When you load a dump file into a new repository, these objects are not replaced by
their counterparts in the dump file because they already exist in the new repository.
However, if you have changed any of these objects in the source repository (the
source of the dump file), the changes are lost because these objects are not loaded.
For example, if you have added any users to the docu group or if you have altered
permissions on the System cabinet, those changes are lost.
To make sure that any changes you have made are not lost, fetch from the source
repository any of the system objects that you have altered and then use the Dump
method to get a record of the changes. For example, if the repository owner's
cabinet was modified, use the following command sequence to obtain a listing of its
property values:
fetch,c,cabinet_id
dump,c,l
After the load operation, you can fetch and dump the objects from the new
repository, compare the new dump results with the previous dump results, and
make any necessary changes.
Documentum provides a utility that you can run on a dump file to determine which
objects you must create in the new repository before you load the dump file.
The utility can also create a DQL script that you can edit and then run to create the
needed objects. The syntax for the preload utility is:
preload repository [-Uusername] -Ppassword -dump_file filename [-script_file name]
• repository is the name of the repository into which you are loading the dump file.
• username and password are the credentials of the user running the utility.
• filename is the name of the dump file.
• name defines a name for the output DQL script.
Note: This utility does not report all storage areas in the source repository, but
only those that have been copied into the dump file.
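For example, to run the utility against a hypothetical dump file and produce an editable DQL script (the repository, user, and file names shown are illustrative):

```
preload newrepository -Udmadmin -Pdmadmin_password -dump_file /tmp/repository.dump -script_file preload_objects.dql
```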
Use the procedure in this section to load a dump file into a new repository.
Note: You cannot perform this procedure in an explicit transaction because the
load operation performs periodic commits to the repository. Documentum
Server does not allow you to save the load record object to start the load
operation if you are in an explicit transaction.
Notes
• If the repository shares any directories with the source repository, you
must assign the repository an ID that differs from the source repository
ID.
• If the old and new repositories have different owners, make sure that
the new repository owner has Read privileges in the storage areas used
by the old repository if the old repository was not dumped with the
include_content property set to TRUE.
2. Create the necessary storage objects and associated location objects in your new
repository.
Each storage object in your source repository must have a storage object with
the same name in the new repository. The filestore objects in the new repository
must reference location objects that point to actual directories that differ from
those referenced by the location objects in the source repository.
For example, suppose you have a file store object with the name storage_1 in
your source repository that points to the location object named engr_store,
which references the d:\documentum\data\engr (/u04/home/
<installation_owner>/data/engr) directory. In the new repository, you must create
a file store object with the name storage_1 that references a location object that
points to a different directory.
Note: The location objects in the source and new repositories can have the
same name or different names. Either option is acceptable.
3. If your storage areas in the source repository had associated full-text indexes,
create corresponding fulltext index objects and their location objects in the new
repository. Note that these have the same naming requirements as the new
storage objects described in “Load procedure for new repositories” on page 123.
5. Log in as the owner of the installation and use IAPI to execute the script.
When you start IAPI, connect to the new repository as a user who has Sysadmin
privileges in the repository.
6. After the load completes successfully, you can destroy the load object:
destroy,c,load_object_id
Notes
• Destroying the load object removes the load record objects that are
generated by the loading process and clears old state information.
• If you created the dump file by using a script, move the dump file to
archival storage or destroy it after you successfully load the file. You
cannot successfully execute the script again if you leave the dump file in
the location where the script created it. Documentum Server does not
overwrite an existing dump file with another dump file of the same
name.
• If Documentum Server crashes during a load, you can fetch the Load
Object and save it again, to restart the process. Documentum Server
begins where it left off when the crash occurred.
4.2.8.12.8 DocApps
DocApps are not dumped when you dump a repository. Consequently, after you
load a new repository, install and run the DocApp installer to reinstall the DocApps
in the newly loaded repository.
To clean a repository:
To delete a rendition without deleting its source document, first update the
content object for the rendition to remove its reference to the source document.
For example, the following UPDATE...OBJECT statement updates all server-
and user-generated renditions created before January 1, 2000:
UPDATE "dmr_content" OBJECTS
SET "parent_count" = 0,
TRUNCATE "parent_id",
TRUNCATE "page"
WHERE "rendition" != 0 AND "set_time" < DATE('01/01/2000')
The updates in the preceding example statement detach the affected renditions
from their source documents, effectively deleting them from the repository.
4. Clean the temp directory by deleting the temporary files in that location.
You can determine the location of the temp directory with the following query:
SELECT "file_system_path" FROM "dm_location"
WHERE "object_name" = 'temp'
7. Delete or archive old server logs, session logs, trace files, and old versions of the
product.
Session logs are located in the %DOCUMENTUM%\dba\log\repository_id
($DOCUMENTUM/dba/log/repository_id) directory.
Documentum Server and connection broker log files are found in the
%DOCUMENTUM%\dba\log ($DOCUMENTUM/dba/log) directory. The
server log for the current server session is named repository_name.log. The log
for the current instance of the connection broker is named
docbroker.docbroker_hostname.log. Older versions of these files have the
extension .save and the time of their creation appended to their name.
On Windows, you can use the del command or the File Manager to remove
unwanted session logs, server logs, and connection broker logs. On Linux, use
the rm command.
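A small Python sketch of how this cleanup could be scripted; the '.save' naming convention is taken from the text above, while the function name and the review-before-delete flow are assumptions:

```python
from pathlib import Path

def old_log_files(log_dir):
    """Return the names of superseded log files in log_dir, identified
    by the '.save' marker (plus creation time) appended to their names."""
    return sorted(p.name for p in Path(log_dir).glob("*.save*"))
```

Review the returned list before deleting, so the current repository_name.log and connection broker logs are never touched.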
The Consistency Checker tool is a job that can be run from the command line or
using Documentum Administrator. “Running jobs” on page 455 contains more
information on running the job in Documentum Administrator.
The job generates a report that lists the checked categories and any inconsistencies
that were found. The report is saved in the /System/Sysadmin/Reports/
ConsistencyChecker directory. If no errors are found, the current report overwrites
the previous report. If an error is found, the current report is saved as a new version
of the previous report. By default, the Consistency Checker job is active and runs
once a day.
#################################################
## CONSISTENCY_CHECK: Users & Groups
## Start Time: 09-10-2002 10:15:55
#################################################
#################################################
## CONSISTENCY_CHECK: ACLs
## Start Time: 09-10-2002 10:15:55
#################################################
#################################################
## CONSISTENCY_CHECK: Sysobjects
## Start Time: 09-10-2002 10:15:58
#################################################
Checking for sysobjects with non-existent primary cabinets
Checking for sysobjects with non-existent i_chronicle_id
Checking for sysobjects with non-existent i_antecedent_id
Checking for sysobjects with missing dm_sysobject_r entries
Checking for sysobjects with missing dm_sysobject_s entry
#################################################
## CONSISTENCY_CHECK: Folders and Cabinets
## Start Time: 09-10-2002 10:16:02
#################################################
#################################################
## CONSISTENCY_CHECK: Documents
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Content
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Workflow
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Types
## Start Time: 09-10-2002 10:16:04
#################################################
#################################################
## CONSISTENCY_CHECK: Data Dictionary
## Start Time: 09-10-2002 10:16:04
#################################################
#################################################
## CONSISTENCY_CHECK: Lifecycles
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: FullText
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: Indices
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: Methods
## Start Time: 09-10-2002 10:16:11
#################################################
1. Click Start > Programs > Documentum > Documentum Server Manager to start
the Documentum Server Manager.
2. Click the Utilities tab, then click Server Configuration.
The Documentum Server Configuration Program starts.
3. Click Next and follow the configuration instructions on the screen.
4.4 Federations
A federation is a set of two or more repositories bound together to facilitate the
management of a multi-repository distributed configuration. Federations share a
common name space for users and groups and project to the same connection
brokers.
Global users, global groups, and global permission sets are managed through the
governing repository, and have the same property values in each member repository
within the federation. For example, if you add a global user to the governing
repository, that user is added to all the member repositories by a federation job that
synchronizes the repositories.
One enterprise can have multiple repository federations, but each repository can
belong to only one federation. Repository federations are best used in multi-
repository production environments where users share objects among the
repositories. We do not recommend creating federations that include production,
development, and test repositories, because object types and format definitions
change frequently in development and test environments, and these must be kept
consistent across the repositories in a federation.
The repositories in a federation can run on different operating systems and database
platforms. Repositories in a federation can have different server versions; however,
the client running on the governing repository must be version-compatible with the
member repository in order to connect to it.
Before you set up a repository federation, read the OpenText Documentum Platform
and Platform Extensions Installation Guide.
All repositories in a federation must project to the same connection brokers. When
you create a federation, Documentum Administrator updates the connection broker
projection information in the server configuration object for each member
repository. No manual configuration is necessary.
Field Description
Info tab
Name Type the name of the new federation.
Make All Governing Repository Users & Select to make all users and groups in the
Groups Global governing repository global users and global
groups.
Active This option is available if you are modifying
an existing federation. To change the status
of an existing federation, select or clear the
Active checkbox.
User Subtypes tab
Add Click Add to add user subtypes.
You can make a federation inactive by accessing the Info page of the federation and
clearing the Active checkbox.
Some of the keys in the file are dynamic; changes to dynamic keys affect open
sessions. Changing non-dynamic keys affects only future sessions. By default,
Documentum Server checks every 30 seconds for changes in the dfc.properties file.
The dfc.properties keys and associated values are described in the dfcfull.properties
file, which is installed with DFC. The keys in the dfc.properties file have the
following format:
dfc.<category>.<name>=<value>
The <category> identifies the functionality, feature, or facility that the key controls.
When adding a key to the dfc.properties file, you must specify the category, the
name, and a value. For example:
dfc.data.dir= <value>
dfc.tracing.enable= <value>
dfc.search.ecs.broker_count = <value>
For desktop applications and Documentum Server, the dfc.properties file is installed
in the C:\Documentum\Config directory on Windows hosts, and the
$DOCUMENTUM_SHARED/config directory on Linux hosts. The file can be edited
using any text editor.
The dfc.properties file is a standard Java properties file. If a key value has a special
character, use a backslash (\) to escape the character. In directory path
specifications, use a forward slash (/) to delimit directories in the path.
To specify a connection broker in the dfc.properties file, use the following keys:
dfc.docbroker.host[<n>]=<host_name>|<IP_address> #required
dfc.docbroker.port[<n>]=<port_number> #optional
The [<n>] is a numeric index, where <n> is an integer starting with 1. All keys for a
particular connection broker must have the same numeric index. If there are entries
for multiple connection brokers, DFC contacts the connection brokers in ascending
order based on the numeric index by default.
The <host_name> is the name of the machine on which the connection broker resides.
The <IP_address> is the IP address of the machine.
The port is the TCP/IP port number that the connection broker is using for
communications. The port number defaults to 1489 if it is not specified. If the
connection broker is using a different port, you must include this key.
The following example defines two connection brokers for the client. Because
lapdog is not using the default port number, its port number is also specified in the
file:
dfc.docbroker.host[1]=bigdog
dfc.docbroker.port[1]=1489
dfc.docbroker.host[2]=lapdog
dfc.docbroker.port[2]=1491
Only the host specification is required. The other related keys are optional.
If you add, change, or delete the keys of a connection broker, the changes are visible
immediately. You do not have to restart your session.
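The ordering rules above can be illustrated with a short Python sketch; the parser and its regular expression are illustrative only, not part of DFC:

```python
import re

DEFAULT_PORT = 1489  # used when no dfc.docbroker.port[<n>] key is present

def docbroker_order(properties_text):
    """Parse dfc.docbroker.host/port keys and return (host, port) pairs
    in the ascending numeric-index order in which DFC contacts them."""
    hosts, ports = {}, {}
    for line in properties_text.splitlines():
        match = re.match(r"dfc\.docbroker\.(host|port)\[(\d+)\]=(\S+)", line.strip())
        if match:
            kind, index, value = match.group(1), int(match.group(2)), match.group(3)
            (hosts if kind == "host" else ports)[index] = value
    return [(hosts[i], int(ports.get(i, DEFAULT_PORT))) for i in sorted(hosts)]
```

For the bigdog/lapdog example above, the sketch yields bigdog on 1489 followed by lapdog on 1491; a host entry with no matching port key falls back to the 1489 default.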
• native
Only native connections are allowed. If DFC cannot establish a native
connection, the connection attempt fails.
• secure
Only secure connections are allowed. If DFC cannot establish a secure connection,
the connection attempt fails.
• try_secure_first
A secure connection is preferred, but a native (unsecured) connection is
accepted. DFC attempts to establish a secure connection first. If it cannot, it tries
to establish a native connection. If that also fails, the connection attempt fails.
• try_native_first
A native connection is preferred, but a secure connection is also accepted. DFC
attempts to establish a native connection first. If it cannot, it tries to establish a
secure connection. If that also fails, the connection attempt fails.
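The four modes reduce to an ordered list of connection attempts, as this Python sketch shows; the function and its callables are illustrative, not DFC's actual API:

```python
def connect(mode, try_native, try_secure):
    """Model the four secure-connect modes: each mode is an ordered list
    of connection attempts, and the first attempt that succeeds wins."""
    attempts = {
        "native": [try_native],
        "secure": [try_secure],
        "try_secure_first": [try_secure, try_native],
        "try_native_first": [try_native, try_secure],
    }[mode]
    for attempt in attempts:
        try:
            return attempt()
        except ConnectionError:
            continue
    raise ConnectionError(f"connection attempt failed for mode {mode!r}")
```

Note that "native" and "secure" never fall back, while the two try_* modes attempt the preferred type first and fail only if both attempts fail.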
Specifying a connection type in the application overrides the default setting in the
secure_connect_default key. OpenText Documentum Foundation Classes Development
Guide contains information on how to specify it in an application. For information
about configuring the server default for connections, read the Secure Connect Mode
property in “Modifying general server configuration information” on page 39.
• Default kill
The default kill provides the least disruption to the end user. The targeted
session terminates when it has no more open transactions and no more open
collections. The client remains functional.
• After current request kill
An after current request kill provides a safe and more immediate means of
terminating a session. If the session is not currently executing a request, the
session is terminated. Transactions can be rolled back and collections can be lost.
• Unsafe kill
An unsafe kill terminates a session when all other techniques have failed. Use it
with caution; it can result in a general server failure.
The method server itself is a Java-based web application. Each time a method is
invoked, the Documentum Server makes an HTTP request passing the name of the
Java class which implements the method along with any specified arguments to a
servlet which knows how to execute the specified method.
Note: The JMS can also be used to execute Java methods that are not associated
with a method object. Use an HTTP_POST administration method to send the
request to the Java method server. OpenText Documentum Server DQL Reference
Guide contains information on the administration method.
Each JMS is represented by a JMS configuration object. The Java Method Server
Configuration page displays information for all JMS configuration objects in the
repository.
The Tools menu on the Java Method Server Configuration page also provides the
option to view information about all active Java method servers. Select Tools >
Active Java Method Servers List to display the Active Java Method Servers List
page.
Field Description
Name The name of the JMS configuration.
Enable Enables or disables the JMS.
Field Description
Documentum Server location Specifies whether the JMS is associated with an RCS. Valid
values are:
• Java Method Server for primary Documentum Server: The
ACS is local.
• Java Method Server for secondary (or additional) Documentum
Server: The ACS is remote.
Add Click Add to add and associate a Documentum Server with
the JMS configuration.
Field Description
Associated Content Server The Documentum Server with which the JMS
cache is associated.
Last Refreshed Time The last time the Documentum Server JMS
cache was reset.
Incremental Wait Time on Failure The time Documentum Server waits before
contacting the JMS, if the JMS fails to
respond.
Field Description
Java Method Server in Use The name of the active Java method server
that was used last time.
Java Method Servers associated with the Documentum Server
Name The name of the JMS configuration object.
Is Enabled Specifies whether the Java method server is
enabled or disabled.
Status The current status of the JMS configuration
object.
Documentum Server location Specifies whether the JMS is associated with
an RCS. Valid values are:
• Java Method Server for primary
Documentum Server: The ACS is local.
• Java Method Server for secondary (or
additional) Documentum Server: The ACS is
remote.
Last Failure Time The last time the JMS failed to respond.
Next Retry Time The next time the Documentum Server tries
to contact the JMS, if the JMS fails to
respond.
Failure Count The number of times the JMS failed to
respond.
Methods are deployed as a BOF module and executed on a Java method server. The
methods must have the following property values:
Notes
/**
 * Interface for Java methods that are invoked by the
 * Documentum Server.
 */
public interface IDmMethod
{
  /**
   * Serves as the entry point to a Java method (installed
   * in the application server) executed by the Content
   * Server DO_METHOD apply method.
   *
   * @param parameters A Map containing parameter names as keys
   * and parameter values as map values. The keys in the parameter
   * map are of type String.
   * The values in the parameter map are of type String array.
   * (This map corresponds to the string ARGUMENTS passed by the
   * DO_METHOD apply call.)
   *
   * @param output OutputStream to be sent back as the
   * HTTP response content, to be saved in the repository
   * if SAVE_RESULTS was set to TRUE in the DO_METHOD apply call.
   * NOTE: This output stream is NULL if the DO_METHOD was
   * launched asynchronously. Always check for a null before
   * writing to the OutputStream.
   */
  void execute(Map parameters, OutputStream output) throws Exception;
}
• Developers can package and deploy the Java server method implementation in
the same DAR file as the dm_method definition.
• The Java method is self-contained and is stored automatically in the default
location for the module.
• The Java method server does not have to be restarted to implement the method.
3. Update the server configuration object for each server from which HTTP_POST
methods might originate.
Add the name of new servlet to the app_server_name property and the URI of
servlet to the app_server_uri property.
Use Documentum Administrator to modify the server configuration object.
4. Select Re-initialize.
It is not necessary for all users and groups in a repository to be managed through an
LDAP directory server. A repository can have local users and groups in addition to
the users and groups managed through a directory server. You can use more than
one LDAP directory server for managing users and groups in a particular
repository.
Using an LDAP server provides a single place for making additions and changes to
users and groups. Documentum Server runs a synchronization job to automatically
propagate the changes from the directory server to all the repositories using the
directory server.
The LDAP support provided by Documentum Server allows mapping LDAP user
and group attributes to user and group repository properties or a constant value.
When the user or group is imported into the repository or updated from the
directory server, the repository properties are set to the values of the LDAP
properties or the constant. The mappings are defined when Documentum Server
creates the LDAP configuration. The mappings can be modified later.
Apart from the unique identifiers, all attributes that are mapped in the
LDAP configuration object must also be readable in the directory server.
Select Administration > Basic Configuration > LDAP Servers to view all primary
LDAP servers that are configured to the repository. If there are no LDAP servers
configured to the repository, Documentum Administrator displays the message No
LDAP Server Configurations. “LDAP server configuration page properties”
on page 150 describes the properties that are displayed on the LDAP Server
Configuration page.
Field Value
Name The name of the LDAP configuration object.
Hostname The name of the host on which the LDAP
directory server is running.
Port The port number where the LDAP directory
server is listening for requests.
SSL Port The SSL port for the LDAP directory server.
Directory Type The directory type used by the LDAP
directory server.
Import Indicates if users and groups, groups and
member users, or users only are imported.
Sync Type Indicates if synchronization is full or
incremental.
Failover Indicates if failover settings have been
established for the primary server.
Enabled Indicates whether the LDAP server is active.
To create a new LDAP configuration, you need the following information about the
LDAP directory server:
• The name of the host where the LDAP directory server is running
• The port where the LDAP directory server is listening
• The type of LDAP directory server
• The binding distinguished name and password for accessing the LDAP directory
server
• The person and group object classes for the LDAP directory server
• The person and group search bases
• The person and group search filters
• The Documentum attributes that you are mapping to the LDAP attributes
Notes
• Make sure that the binding user has the List Contents and Read Property
(LCRP) permission on the Deleted Objects container when you configure the
LDAP objects. Otherwise, users deleted in the Active Directory server are
shown as active LDAP users in the repository. If the optional LDAP
Synchronization job argument container_for_deleted is set, the argument
value is used instead of Deleted Objects when checking for objects
deleted in Active Directory.
• To avoid a null pointer exception when ldap_matching_rule_in_chain is
used for nested group synchronization (iterating through a collection of
objects retrieved from Active Directory), set the value of
ldap_matching_rule_in_chain to false. By default, the value is true.
Field Description
Name The name of the new LDAP configuration
object.
Options are:
• Sun ONE/Netscape/iPlanet Directory
Server (default)
• Microsoft Active Directory
• Microsoft ADAM
• Oracle Internet Directory Server
• IBM Directory Server
• Novell eDirectory
Caution
For the 7.0 release, use only the
hostname and not the IP address.
However, you can use both the
hostname and IP address in pre-7.0
releases.
Field Description
Binding Password The password for the binding distinguished
name, used to authenticate requests to the
LDAP directory server by Documentum
Server or the check password program.
Field Description
Validate SSL Connection If you selected Use SSL, click to validate that
a secure connection can be established with
the LDAP server on the specified port. If the
validation fails, the system displays an error
message and you cannot proceed further
until valid information is provided.
Perform these manual steps for SSL validation for 6.5x and earlier versions of
Documentum Servers:
1. Depending on the operating system (other than Windows 64-bit) on which the
application server is installed, copy all the jar files from $Application_root$/
WEB-INF/thirdparty/$osname$ to $Application_root$/WEB-INF/lib
For example, if the operating system on which the DA application is installed is
Windows, copy all the jar files from $Application_root$/WEB-INF/thirdparty/
win32/ to $Application_root$/WEB-INF/lib
If the operating system on which the application server is installed is Windows
64-bit and the application server is using 64-bit JDK, do the following:
2. Depending on the operating system (other than Windows 64-bit) on which the
application server is installed, copy all *.dll, (for Windows) or *.so (for Linux)
files from $Application_root$/WEB-INF/thirdparty/$osname$ to
$AppServer_root$/da_dlls
Note: If the da_dlls folder does not exist in the preceding location, create
it.
3. Set the path of the dlls in startup batch file of the application server.
Note: Make sure that you set the value of ssl_mode to 0 in the active
dm_ldap_config object.
• 2: Used for LDAP_SIMPLE_BIND.
Field Description
Import Specifies how users and groups are
imported. Available options are:
• Users and groups (default)
• Users only
• Groups & member users
Synchronize Nested Groups in the repository Select to synchronize the nested groups in
the repository.
Field Description
Sync Type Specifies how users and groups are
synchronized. Available options are:
• Full: Import all based on user/group
mappings (default)
• Incremental: Import only new or updated
user/groups/members
If Groups & member users is selected
in the Import field and a group was not
updated but any of the group members
were, the incremental synchronization
updates the users identified by the user
search filter.
The LDAP synchronization job must have at least read access to a unique identifier
on the directory server, as follows:
To activate the nested cyclic group feature, you must update the dm_docbase_
config object as follows:
API> retrieve,c,dm_docbase_config
...
3c00014d80000103
API> append,c,l,r_module_name
SET> LDAP_CYCLIC_SAVE
...
OK
API> append,c,l,r_module_mode
SET> 1
...
OK
API> save,c,l
...
OK
API>
Field Description
User Object Class Type the user object class to use for searching
the users in the directory server.
User Search Base Type the user search base. This is the point in
the LDAP tree where searches for users start.
For example:
cn=Users,ou=Server,dc=sds,dc=inengvm1llc,dc=corp,dc=test,dc=com
Field Description
User Search Filter Type the person search filter. This is the
name of the filter used to make an LDAP
user search more specific. The typical filter is
cn=*
Search Builder Click to access the Search Builder page. This
page enables you to build and test a user
search filter. When finished, the User Search
Filter field is populated with the resulting
filter.
Group Object Class Type the group object class to use for
searching the groups in the directory server.
Typical values are:
• For Netscape and Oracle LDAP servers:
groupOfUniqueNames
• For Microsoft Active Directory: group
Group Search Base Type the group search base. This is the point
in the LDAP tree where searches for groups
start. For example:
cn=Groups,ou=Server,dc=sds,dc=inengvm1llc,dc=corp,dc=test,dc=com
Group Search Filter Type the group search filter. This is the name
of the filter used to make an LDAP group
search more specific. The typical filter is cn=*
Search Builder Click to access the Search Builder page. This
page enables you to build and test a group
search filter. When finished, the Group
Search Filter field is populated with the
resulting filter.
Property Mapping When a new configuration is added, this
table populates with the mandatory mapping
attributes. The mappings are dependent
upon the directory type. This table defines
the pre-populated attributes and their
mappings. All mapping types are LDAP
Attribute.
Add Click to access the Map Property page to add
an attribute. From there, select a
Documentum attribute, then select the LDAP
attribute to which the Documentum attribute
maps or type in a custom value.
Edit Select an attribute and then click Edit to
access the Map Property page. On the Map
Property page, edit the attribute properties.
Field Description
Delete Select an attribute and then click Delete to
remove an attribute. The system displays the
Deletion Confirmation page.
Repository Property Displays the repository property that is the
target of the mapping.
Type Identifies the source of the property: User or
Group.
Map To Displays the LDAP attribute to which the
property is mapped.
Map Type Identifies the type of data: LDAP attribute,
expressions, or a fixed constant.
Mandatory Indicates if the mapping is mandatory for the
attribute.
Field Description
Failover Settings Use this section to enter settings for the
primary LDAP server.
Field Description
Retry Count The number of times Documentum Server
tries to connect to the primary LDAP server
before failing over to a designated secondary
LDAP server. The default is 3.
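The retry-then-failover behavior could be modeled as follows; this is an illustrative sketch of the described policy, not Documentum Server code:

```python
def connect_with_failover(primary, secondaries, retry_count=3):
    """Try the primary LDAP server retry_count times, then fail over
    to each designated secondary server in turn."""
    for _ in range(retry_count):
        try:
            return primary()
        except ConnectionError:
            continue
    for secondary in secondaries:
        try:
            return secondary()
        except ConnectionError:
            continue
    raise ConnectionError("primary and all secondary LDAP servers unreachable")
```

With the default retry count of 3, the primary server is attempted three times before the first secondary is contacted.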
Mappings between LDAP attributes and repository properties are defined when you
create the LDAP configuration object. You can map the LDAP values to the
following properties:
If the sn (surname) of a user is Smith and the given name is Patty, the preceding
expression resolves to [email protected]. The 1 at the end of the given name
attribute directs the system to use only the first letter of the given name.
You can specify an integer at the end of an LDAP attribute name in an expression to
denote that you want to include only a substring of that specified length in the
resolved value. The integer must be preceded by a pound (#) sign. The substring is
extracted from the value from the left to the right. For example, if the expression
includes ${sn#5} and the surname is Anderson, the extracted substring is Ander.
Note: Documentum Server does not support PowerShell or any other scripting
extensions for the expression notation.
Values of repository properties that are set through mappings to LDAP attributes
can only be changed either through the LDAP entry or by a user with superuser
privileges.
7.1.3.2 Building and testing user or group search filter using Search
Builder
Access the Search Builder page by clicking the Search Builder button on the
Mapping tab of the New LDAP Server Configuration or LDAP Server Configuration
Properties page.
The Search Builder page enables you to build and test a user or group search filter.
You can enter up to ten lines of search criteria. When finished, the User Search Filter
or Group Search Filter field is populated with the resulting filter.
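Conceptually, such a builder combines the object class with each criterion line into a single AND-ed LDAP filter string, as in this sketch; the composition rule shown is an assumption, and the actual Search Builder may generate different filters:

```python
def build_search_filter(object_class, criteria):
    """Compose an LDAP search filter string that requires the given
    object class and every (attribute, value) criterion to match."""
    clauses = "".join(f"({attribute}={value})" for attribute, value in criteria)
    return f"(&(objectclass={object_class}){clauses})"
```

For instance, an object class of group with the single criterion cn=* yields (&(objectclass=group)(cn=*)).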
3. In the Map To section, select the LDAP property to which the repository
property maps or type a custom value. Options are:
• Single LDAP Attributes: If selected, select an LDAP attribute from the drop-
down list.
Provide the information for the secondary LDAP server, as described in “Secondary
LDAP server page properties” on page 163.
Field Description
Name Enter the name of the secondary LDAP
server.
Hostname / IP Address Type the name of the host on which the
secondary LDAP directory server is running.
Port The port information is copied from the
primary LDAP server.
Binding Name The binding name is copied from the
primary LDAP server.
Binding Password Type the password for the binding
distinguished name used to authenticate
requests to the secondary LDAP directory
server by Documentum Server or the check
password program.
Confirm Password Re-enter the binding password for
verification.
Bind to User DN The bind to user DN information is copied
from the primary LDAP server.
Field Description
Use SSL The SSL information is copied from the
primary LDAP server.
SSL Port The SSL port number is copied from the
primary LDAP server.
Certificate Location The certificate location is copied from the
primary LDAP server.
1. Modify the user login domain on page 204 attribute in the user object of all
Kerberos users to use the short domain name instead of the name of the LDAP
server.
For example, if a Kerberos user is part of the wdkdomain.com domain, change
the user login domain attribute to wdkdomain.
2. Change the user source on page 205 attribute in the user object to dm_krb for all
Kerberos users that are synchronized via LDAP, if the password is not in plug-
in format. Changing the user source attribute is optional.
1. Create an LDAP configuration object. Use the short domain name as the LDAP
configuration object name on page 150.
For example, if Kerberos users are part of the wdkdomain.com domain, create
an LDAP configuration object using wdkdomain as the LDAP configuration
object name.
2. Change the user source on page 205 attribute in the user object to dm_krb for all
Kerberos users that are synchronized via LDAP.
Only an administrator who is the installation owner can access the LDAP Certificate
Database Management node.
3. Modify the certificate location field for each LDAP server configuration that
uses SSL to point to the same certificate database location.
By default, Documentum Server assigns the file path that is specified in the
ldapcertdb_loc object.
You can manage Managed Service Accounts (MSAs) for Documentum services. Use
the instructions in this chapter to configure an MSA for starting or stopping
Documentum processes securely using an MSA user account. The Documentum
processes include the repository, connection broker, and JMS services.
8.1 Windows
8.1.1 Prerequisites
1. Configure the Active Directory Module for PowerShell and install the Active
Directory Domain Services tools on a Windows Server machine. The Microsoft
documentation contains the instructions.
2. Import the Active Directory Module in the Windows Server. For example:
> Import-Module ActiveDirectory
3. Create the MSA in the Active Directory manually (for example, myacc4) or
using the Active Directory Module for PowerShell. For example:
b. Create MSA.
> New-ADServiceAccount -Name myacc4 -Enabled $true
// DNSHostName : hostname.Primary DNS Suffix (for example,
win12r2orcl-001.ottest.local)
4. Verify that the new MSA, myacc4, was created in the Active Directory. For
example, fetch the MSA details from Active Directory in PowerShell:
> Get-ADUser -Server <IP address of Active Directory Server or DNS Host Name> `
-Filter 'SAMAccountName -like "myacc4"'
5. Associate the new MSA, myacc4, with a target machine. For example:
> Add-ADComputerServiceAccount -Identity win12r2orcl-001 -ServiceAccount myacc4
7. Install the MSA you created on the local machine. Grant the server permission
to retrieve the MSA password from the Active Directory. For example:
> Set-ADServiceAccount myacc4 -PrincipalsAllowedToRetrieveManagedPassword
win12r2orcl-001$
> Install-ADServiceAccount -identity myacc4;
10. Create the dm_ldap_config object using IAPI and associate it with the Active
Directory.
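Creating the object in IAPI might start like the following minimal sketch. The object name wdkdomain is illustrative, and a working configuration also requires the LDAP host, port, search bases, filters, and other attributes, which are typically set through Documentum Administrator or additional set calls:

```
create,c,dm_ldap_config
set,c,l,object_name
wdkdomain
save,c,l
```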
11. Modify the filters and search appropriately in the LDAP configuration and then
run the LDAP synchronization job to synchronize the MSA user in
Documentum Server from Active Directory. For example:
retrieve,c,dm_ldap_config
12. Verify the synchronized user in the repository using IAPI. For example:
retrieve,c,dm_user where user_name='myacc4'
Caution
Do not change user_password of the MSA user using IAPI, as it is
encrypted. Changing it could lead to corruption and cause issues with
the synchronization between Documentum Server and Active
Directory.
14. Associate the MSA with your services. For example, for Documentum Server,
associate with the Documentum repository, connection broker, and JMS
services.
15. Change the installation owner manually to a domain user account as follows:
c. Edit the server.ini file (if it has not already been changed). For example:
user_auth_target = <DOMAIN NAME>
install_owner = <DOMAIN USER>
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
\DmServer<Docbasename>]
ObjectName=<DOMAIN NAME>\<DOMAIN USER>
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
\DmServer<Docbasename>]
ImagePath=C:\Documentum\product\16.7\documentum.exe -docbase_name testenv
-security acl -init_file
C:\Documentum\dba\config\testenv\server.ini -run_as_service -install_owner
myacc4$ -logfile
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DmMethodServer]
ObjectName=<DOMAIN NAME>\<DOMAIN USER>
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DmDocBroker]
ObjectName=<DOMAIN NAME>\<DOMAIN USER>
• [HKEY_LOCAL_MACHINE\SOFTWARE\Documentum\Server\16.4]
"DM_DMADMIN_USER"="<DOMAIN USER>"
"DM_DMADMIN_DOMAIN"="<DOMAIN NAME>"
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application
\Documentum]
• [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog]
• [HKEY_LOCAL_MACHINE\SOFTWARE\DOCUMENTUM\DOCBASES]
g. Run the following command and grant permission for the new installation
owner:
> icacls C:\Documentum /grant <DOMAIN NAME>\<DOMAIN USER>:F /t /q
After the completion of all the steps, make sure that all services are up and
running.
16. Make sure that the Documentum connection broker and JMS services start from
services.msc.
17. To start the repository using MSA, instead of using services.msc, open the
command prompt window and set the installation owner explicitly for your
MSA user. For example:
> C:\Documentum\product\16.7\bin\documentum.exe -docbase_name testusername -
security acl -init_file
If the repository fails to start, run the dm_crypto_boot utility and verify the
password.
18. Navigate to %DCTM_HOME%\bin and run the following command to start the
agent exec program:
> dm_agent_exec.exe -docbase_name testenv -docbase_owner Administrator
You can use IAPI as a trusted administrator user and run jobs such as
dm_UserRename, dm_LDAPSynchronization, and so on.
2. Run the following PowerShell cmdlet to remove the MSA from the local
machine:
> Remove-ADServiceAccount -identity <MSA name>
3. (Optional) Remove the service account from Active Directory. For example:
> Remove-ADComputerServiceAccount -Identity <the machine where MSA is assigned to> -
ServiceAccount <MSA>
Skip this step if you want to reassign an existing MSA from one machine to
another.
Notes
• When you create a new dm_ldap_config object other than the one already
created, you must manually grant the MSA user full access permission to the
new .cnt file generated at C:\Documentum\dba\
config\testenv\. Otherwise, an access denied error occurs when
you run the LDAP synchronization job using the new LDAP configuration
object, because Windows does not apply default user permissions to
a new file or folder.
• This configuration applies to the MSA user domain only. An LDAP server or
Active Directory in a domain different from the MSA domain is not
supported for MSA configuration.
8.2 Linux
8.2.1 Prerequisites
1. Configure a Windows Server machine with Active Directory. Microsoft
documentation contains the instructions.
2. Make sure that all Linux servers where Documentum Server services need to be
managed are connected to the Windows Active Directory domain.
3. Install Documentum Server in a location where the Active Directory user has
full access. For example: /opt/dctm/.
4. Make sure that you allow Secure Shell (SSH) or OpenSSH to access Linux
servers from Windows Active Directory machine.
3. Use the migration tool to change the installation owner (install_owner) from a
local user (for example, dmadmin) to the Windows Active Directory domain
user.
4. Update the config.xml as needed and add the path of ojdbc.jar in the
MigrationUtil.sh launch script.
5. Create an RSA key pair (without a password) so that you can access the Linux
servers from the Windows Server without being prompted for a password.
An MSA user gets privileged access from the Windows Active Directory
machine to perform actions on the Linux servers where the Documentum
services are installed.
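Step 5 can be sketched with the standard OpenSSH tools. This is a minimal sketch: the key path is illustrative (production keys usually live under ~/.ssh), and the ssh-copy-id target host is a placeholder:

```shell
# Generate a password-less RSA key pair (illustrative directory)
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -N "" -f "$KEYDIR/id_rsa_dctm" -q

# Install the public key on each Linux server so that SSH does not
# prompt for a password (placeholder host; run once per server):
#   ssh-copy-id -i "$KEYDIR/id_rsa_dctm.pub" msauser@<IP address of Linux server>
```

The `-N ""` option sets an empty passphrase, which is what allows non-interactive logins from the Windows Active Directory machine.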
8. You can use the example commands described in the following table to manage
Documentum services:
Action Command
Verify whether the connection broker, repository, or JMS is running.
Connection broker:
ssh documentum@<IP address of Linux server> "ps -ef | grep [d]mdocbroker"
Repository:
ssh documentum@<IP address of Linux server> "ps -ef | grep [d]ocumentum | grep 'docbase_name msadoc'"
JMS:
ssh documentum@<IP address of Linux server> "ps -eaf | grep [D]ctmServer_MethodServer"
Stop the connection broker, repository, or JMS.
Connection broker:
ssh msauser@<IP address of Linux server> "/opt/dctm/dba/dm_stop_DocBroker"
Repository:
ssh msauser@<IP address of Linux server> "/opt/dctm/dba/dm_shutdown_msadoc"
JMS:
ssh msauser@<IP address of Linux server> "$DM_JMS_HOME/bin/stopMethodServer.sh"
Start the repository or JMS.
Repository:
ssh msauser@<IP address of Linux server> "export ORACLE_HOME=/opt/app/oracle/product/<oracle_version>/db_1; /opt/dctm/dba/dm_start_msadoc"
JMS:
ssh msauser@<IP address of Linux server> "$DM_JMS_HOME/bin/startMethodServer.sh"
9.1 Overview
With the OpenText Directory Services (OTDS) integration with Documentum Server,
the following are supported:
1. Create a partition.
A partition in OTDS is similar to an LDAP configuration in Documentum Server.
You can create a partition using AD server details, create user and group filters,
monitor the protocol, map AD attributes to OTDS attributes, and so on.
2. Create a resource.
A resource represents any ECM server (for example, a Documentum server). In a
resource, you can configure the REST (Generic) server URL (http://
<JMS_Host>:<JMS_Port>/dmotdsrest), the administrator credentials of the
repository, and the OTDS-to-repository attribute mappings.
Note: Make sure that the values of the user_name and __NAME__ resource
attributes in User Attributes Mappings are the same.
3. (Optional) Select Actions > Include Groups of the access role to synchronize
groups.
4. Map the resources to the partition in the access role.
An access role is created automatically when you create a resource. In the access
role, add the partition and the users or groups that need to be pushed to
Documentum Server.
Caution
You must not consolidate resources manually using the Actions >
Consolidate option in the resources page because the synchronization of
the resource is automatic.
• (Optional) Provide the delimiter (for example, comma) you want to use to
separate the dm_ldap_config objects in
<docbase_name>_migrate_ldap_config as value for
ldap_configs_tokenizer. For example: ldap_configs_tokenizer=,
• Disable the dm_ldap_config objects whose users and groups need to be
migrated.
• Provide the name of the repository for <docbase_name> for
<docbase_name>_migrate_ldap_config. For example, repo1_migrate_
ldap_config. In addition, provide the names of dm_ldap_config object as
values for <docbase_name>_migrate_ldap_docbases. For example: repo1_
migrate_ldap_config=ldapconfig1,ldapconfig2
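Taken together, the bullets above might produce a fragment like the following in the migration configuration file. This is a hypothetical sketch that only reuses the example names given above (repo1, ldapconfig1, ldapconfig2):

```
ldap_configs_tokenizer=,
repo1_migrate_ldap_config=ldapconfig1,ldapconfig2
```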
9.9 Troubleshooting
Use the following information, as appropriate:
Network locations are associated with server configuration objects and acs
configuration objects. The server configuration objects and acs configuration objects
contain information defining the proximity of a Documentum Server or ACS Server
to the network location. Documentum Server uses the information in the server
configuration objects and acs configuration objects and their associated network
locations to determine the correct content storage area from which to serve content
to a web client end user and to determine the correct server to serve the content.
Use the Administration > Distributed Content Configuration > Network Locations
navigation in Documentum Administrator to access the Network Locations list page.
From the Network Locations list page, you can create, copy, view, modify, and
delete network locations.
Note: When localhost is in the URL used from a browser on the application
server host to access an application server, it resolves to 127.0.0.1. Unless
127.0.0.1 is included in a network location, the correct network location is not
selected automatically. Therefore, when you create network locations, include
the IP address 127.0.0.1 in a network location if you want to:
Field Value
Network Location Identifier An identifier that is used by system and
network administrators. For example, to
identify network locations by network
subnets. This field cannot be edited after the
network location is created.
Subject A description of the network location.
Default Network Location Select to display the network location to
users whose IP address is not mapped to a
particular network location.
Field Value
Network Location Name A descriptive name of the network location.
For example, the geographical location of the
network, such as Paris, San Francisco. The
name is displayed on the login page for
Documentum web clients when users must
choose a network location. The display name
is not the object name. The display name can
be modified after the network location is
created.
IP Address Ranges The IP address range identified by the
network location. Each range must conform
to standard IP address conventions. A
network location may have multiple IP
address ranges. It is recommended that each
IP address is mapped to a single network
location, but if an IP address maps to
multiple physical locations, you may need to
map that address to multiple network
locations.
Network locations are used to determine which server provides content files to end
users. If the network location that you are deleting is associated with any BOCS or
ACS servers, users at those locations might not receive content in the most efficient
manner possible.
When you delete network locations, references to the network locations in existing
server configuration objects, acs configuration objects, bocs configuration objects,
and BOCS caching jobs are not automatically removed. You must manually remove
any references to the deleted network locations.
Each Documentum Server host installation has one ACS server that communicates
with one Documentum Server per repository and the Documentum Message
Services (DMS) server. A single ACS server can serve content from multiple
repositories. WDK-based applications can use the ACS server if the ACS server is
enabled in the app.xml file of the applications.
The ACS server is configured on the ACS Servers Configuration page under the
Administration > Distributed Content Configuration > ACS Servers node. The
ACS Servers Configuration page displays information about the ACS server, as
described in “ACS server configurations” on page 189.
Column Description
Name The name that is assigned to the ACS server
during the Documentum Server installation.
The name cannot be modified.
Documentum Server The name of the Documentum Server the
ACS server is associated with.
Content Access Specifies how the ACS server can access
content. Valid values are:
• Access all stores: The ACS server can
access all stores that are connected to the
Documentum Server.
• Access local stores only: The ACS server can
read content from local file stores, but is
unable to use Surrogate Get to request
content files it does not find in the local
file stores.
• None (disabled): The ACS server is
disabled.
Projections & Stores from Specifies the connection broker projections,
network locations, and local stores
information for the ACS server.
Description A description of the ACS server.
Dormancy Status The current dormancy status of the ACS
server. Valid values are:
• Dormant
• Active
Field Description
ACS Server Configuration
Name The name that is assigned to the ACS server
during the Documentum Server installation.
The name cannot be modified.
Associated Content Server The name of the Documentum Server that
the ACS server is associated with.
Description A description of the ACS server.
ACS Server Version The major and minor version of the ACS
server.
Valid values:
• 2.1: Specifies that the ACS server is part
of a Documentum version 6.5 SPx
repository.
• 2.2: Specifies that the ACS server is part
of a Documentum version 6.6 to 6.6
(Patch 21) repository.
• 2.3: Specifies that the ACS server is part
of a Documentum version 6.6 (Patch 22)
to the latest version.
Dormancy Status Indicates the dormancy status of ACS server.
Field Description
ACS Server Connections
Protocol The protocol the ACS server uses. Valid
values are http and https. Click Add to add a
protocol, or select a protocol from the list and
click Edit to modify it or Delete to remove
the protocol.
Base URL The base URL for the ACS server. The base
URL requires the following format:
<protocol>://<host>:<port>/ACS/servlet/ACS
Field Description
Source Specifies where to use projections and stores
for the ACS server. Valid values are:
• Associated Content Server: The ACS
server uses the connection broker
projections, network locations, and local
stores already configured for ACS.
• Settings entered here: You must manually enter the connection
brokers, network locations, and local stores
that the ACS server uses.
If you select this option, an Add button
displays in the Connection Broker
Projections, Network Location
Projections, and Local Stores sections.
Connection Broker Projections
Target Host The host name of the server that hosts the
connection broker.
Field Description
Enabled Enables projections to the connection broker.
Network Location Projections
Network Location Specifies a network location in the global
registry of the Documentum Server host.
Field Description
Source Specifies where to use projections for the
ACS server. Valid values are:
• Associated Content Server: The ACS
server uses the connection broker
projections, network locations, and local
stores already configured for ACS.
When this option is selected, you cannot
modify the connection broker projections.
• Settings entered here: Select this option
to designate connection broker
projections.
If you select this option, an Add button
displays in the Connection Broker
Projections, Network Location
Projections, and Local Stores sections.
Connection Broker Projections
Target Host The host name of the server that hosts the
connection broker.
BOCS servers communicate only with ACS servers and DMS servers, but not
directly with Documentum Servers. Every BOCS server for Documentum 6 or later
repositories is associated with a dm_bocs_config object. The installation program for
the BOCS server does not create the object at installation time. The BOCS server
must be added manually, using the properties on the BOCS Servers Configuration
page. All BOCS configuration objects for Documentum 6 or later repositories reside
in the global registry in the /System/BocsConfig folder.
To create, modify, or view BOCS configuration objects, you must have superuser
privileges and be connected to the global repository that is associated with the DFC
installation. If you are not connected to the global repository and click the BOCS
Server node, the system displays an error message and provides a link to the login
page of the global registry repository.
Field Value
BOCS Server Configuration Info
Name The object name of the BOCS server.
Field Value
BOCS Server Version A numeric string that identifies the set of
distributed content features with which the
BOCS server is compatible. In some cases, a
set of features spans multiple product
versions. Valid values are:
• 1: The Content Access options are limited
to Read Only and None (disabled).
• 2.1: Compatible with Documentum
Server version 6.5 SPx and 6.6.
• 2.2: Compatible with Documentum
Server version 6.6 to 6.6 (Patch 21). This
value is used only when Atmos and ACS
Connector integration is available.
• 2.3: Compatible with Documentum
Server versions 6.6 (Patch 22) to 16.x. This
value is only valid if the actual version of
the installed BOCS server is from 6.6
(Patch 22) to the latest version.
Content Access Select an access type, as follows:
• Read and synchronous write: Select this
option for the BOCS server to support
read and synchronous write.
• Read, synchronous, and asynchronous
write: Select this option for the BOCS
server to support read, synchronous
write, and asynchronous write.
• None (disabled): The BOCS server is
disabled.
Network Locations Network locations served by the BOCS
server.
Field Value
Proxy URL The BOCS proxy URL. The URL can contain
up to 240 characters. The BOCS proxy URL is
a message URL that only DMS uses when
BOCS is in push mode.
BOCS Server Connections
Add Click to access the BOCS Server Connection
page to add a protocol and base URL for the
BOCS server.
Edit Select a communication protocol and then
click Edit to access the BOCS Server
Connection page to edit a protocol and base
URL for the BOCS server.
Delete To delete a BOCS server protocol and base
URL, select a communication protocol and
then click Delete.
Protocol and Base URL The communication protocols used by the
BOCS server to provide content to end users.
The HTTP and HTTPS protocols are
supported. The Base URL must be provided
when the BOCS server is created. It is in the
form:
<protocol>://<host>:<port>/ACS/servlet/ACS
The BOCS server configuration object in the global registry contains the public key
information and generates an electronic signature for the BOCS server to use when
contacting the DMS server. When the BOCS server connects to the DMS server in
pull or push mode, it sends its electronic signature to DMS where DMS matches the
electronic signature to the public key in the BOCS configuration object. If the DMS
server authenticates the BOCS electronic signature, the BOCS server can then pull or
push its messages from or to the DMS server respectively.
Field Value
Pull Mode Enabled Specifies whether the BOCS server
communicates with the DMS server using
the pull mode.
To access the BOCS Server Connection page, click Add or Edit in the BOCS Server
Connections section of the BOCS Server Configuration page.
Deleting the configuration object does not uninstall the BOCS servers; they must be
manually uninstalled from the hosts on which they are running. Without the
configuration object, the BOCS Server cannot provide content from this repository.
Note: You can create multiple messaging server configuration objects using
Documentum Administrator 7.0 on 7.0 and later versions of Documentum
Server and repositories. Use the File > New > Messaging Server Configuration
in the Messaging Server list page.
Use the Administration > Distributed Content Configuration > Messaging Server
navigation to access the Messaging Server Configuration list page.
To modify or view the DMS server configuration object, you must be connected to
the repository known to DFC on the Documentum Administrator host as a global
registry and you must be logged in as a user with superuser privileges.
Administrators logging in to a Documentum repository that is not the global registry
do not see a Messaging Server Configuration list page. If they click the Messaging
Server node, the system displays a message informing administrators that they
logged in to a non-global registry repository and the messaging server configuration
object is stored only in the global registry repository. The system also shows a link
for the administrator to click to navigate to the login page of the global registry
repository.
To save time retyping existing information, you can copy a messaging server
configuration using the Save As option. To copy a messaging server, you must be
connected to the repository known to DFC on the Documentum Administrator host
as a global registry and you must be logged in as a user with superuser privileges.
The User Management page contains links to the user management features that can
be configured for a repository, as described in “Users, groups, roles, and sessions”
on page 201.
Link Description
Users Accesses the Users page. From the Users
page you can
• Search for existing user accounts.
• Create, modify, and delete user accounts.
• View and assign group memberships for
a particular user account.
• Change the home repository for
particular user accounts.
Link Description
Roles Accesses the Roles page. From the Roles
page you can
• Search for existing roles.
• Create, modify, and delete roles.
• View current group memberships and
reassign roles.
11.2 Users
A repository user is a person or application with a user account that has been
configured for a repository. User accounts are created, managed, and deleted on the
User node. In a Documentum repository, user accounts are represented by user
objects. Whenever a new user account is added to a repository, Documentum Server
creates a user object. A user object specifies how a user can access a repository and
what information the user can access.
Field Description
User Name Filters the search results by user name.
Default Group Filters the search results by the name of the
default group.
User Login Name Filters the search results by the login name of
the user.
User Login Domain Filters the search results by login domain.
Before you create users, determine what type of authentication the server uses. If the
server authenticates users against the operating system, each user must have an
account on the server host. If the server uses an LDAP directory server for user
authentication, the users do not need to have operating system accounts.
If the repository is the governing member of a federation, a new user can be a global
user. Global users are managed through the governing repository in a federation,
and have the same attribute values in each member repository within the
federation. If you add a global user to the governing repository, that user is added to
all the member repositories by a federation job that synchronizes the repositories.
Field Value
State Indicates the user account state in the repository. Valid values are:
• Active: The user is a currently active repository user. Active
users are able to connect to the repository.
• Inactive: The user is not currently active in the repository.
Inactive users are unable to connect to the repository. The user
automatically returns to the Active state after
auth_deactivation_interval minutes, if
auth_deactivation_interval is greater than zero.
Documentum Server automatically assigns the Inactive state if
the user exceeds max_auth_attempt failed authentication
attempts within auth_failure_interval minutes.
• Locked: The user is unable to connect to the repository.
• Locked and inactive: The user is inactive and unable to connect
to the repository. The user moves to the locked and inactive
state if a locked user exceeds max_auth_attempt failed
authentication attempts within auth_failure_interval minutes.
If the user is an operating system user, the user login name must
match the operating system name of the user. If the user is an
LDAP user, the user login name must match the LDAP
authentication name of the user.
User Login Domain Identifies the domain in which the user is authenticated. This is
typically a Windows domain or the name of the LDAP server
used for authentication.
Field Value
User Source Specifies how to authenticate user name and password of a given
repository user. Valid values depend on whether the repository
runs on a Linux or Windows server.
• None: The user is authenticated in a Windows domain.
• UNIX only: The user is authenticated using the default Linux
mechanism, dm_check_password or other external password
checking program.
• Domain only: The user is authenticated against a Windows
domain.
• UNIX first: This is used for Linux repositories where Windows
domain authentication is in use. The user is authenticated first
by the default Linux mechanism; if that fails, the user is
authenticated against a Windows domain.
• Domain first: This is used for Linux repositories where
Windows domain authentication is in use. The user is
authenticated first against a Windows domain; if that fails, the
user is authenticated by the default Linux mechanism.
• OTDS: The user is authenticated against OTDS.
• LDAP: The user is authenticated through an LDAP directory
server.
• Inline Password: The user is authenticated based on a password
stored in the repository. This option is available only when
Documentum Administrator is used to create users. It is not
available in other applications in which it is possible to create
users.
• dm_krb: The user is authenticated using Kerberos Single sign-
on (SSO).
Field Value
E-Mail Address The E-mail address of the user. This is the E-Mail address to
which notifications are sent for workflow tasks and registered
events.
User OS Name The operating system user name of the user.
Windows Domain The Windows domain associated with the user account or the
domain on which the user is authenticated. The latter applies if
Documentum Server is installed on a Linux host and Windows
domain authentication is used.
Home Repository The repository where the user receives notifications and tasks.
User is global If the user is created in the governing repository of a federation,
select this option to propagate the user account to all members of
the federation.
Restrict Folder Access To Specifies which folders the user can access.
Click Select to specify a cabinet or folder. Only the selected
cabinets and folders display for the user. The other folders do not
display, but the user can access them using the search or
advanced search options.
Field Value
Privileges The privileges that are assigned to the user.
Field Value
Client Capability Describes the expertise level of the user.
Alias Set The default alias set for the user. Click Select to specify an alias
set.
Disable Workflow Indicates whether a user can receive workflow tasks.
Disable Authentication Failure Checking If selected, the user can exceed the
number of failed logins specified in the Maximum Authentication
Attempts field of the repository configuration object.
Global users can also have local attributes, which you can modify in a local
repository.
• DefaultFolderAcl
Point this alias to the permission set to be applied to the new folder created
for new users.
• DefaultFolderAclDomain
Point this alias to the user who owns the permission set you use for the
DefaultFolderAcl alias.
When you add a user, Documentum Administrator applies the permission set you
designate to the new folder. If a new user is not present as an accessor in the
permission set, the user is granted write permission on the folder. The permission
set for the folder is then modified to a system-generated permission set, but it
otherwise has the permissions from the permission set you created.
You can use Documentum Administrator to create a default folder for an existing
user, and the permission set is applied, if you have created the necessary alias
set and aliases.
If the UserPropertiesConfiguration alias set does not exist and a superuser creates
the user, the user owns the folder and has delete permission. If a system
administrator creates the user, the user is not the owner of the default folder, but the
user has change owner permission on the folder as well as write permission.
• Authentication type
If the server authenticates users against the operating system, each user must
have an account on the server host. If the server uses an LDAP directory server
for user authentication, the users do not need to have operating system accounts.
• Groups and ACLs
If you specify the attributes user_group (the user's default group) and acl_name
(the user's default permission set), any groups and permission sets must already
exist before you import the users.
• Passwords
If you are creating a user who is authenticated using a password stored in the
repository, the password cannot be assigned in the input file. You must assign
the password manually by modifying the user account after the user has been
imported.
Field Value
State Indicates the state of user in the repository.
Select one of the following:
• Active: The user is a currently active
repository user. Active users are able to
connect to the repository.
• Inactive: The user is not currently active in
the repository. Inactive users are unable
to connect to the repository.
Field Value
User Source Specifies how to authenticate user name and
password of a given repository user. Valid
values are:
• None: Windows only. This means the user
is authenticated in a Windows domain.
• UNIX only: The user is authenticated
using the default Linux mechanism,
dm_check_password or other external
password checking program. This option
only displays on Linux hosts.
• Domain only: The user is authenticated
against a Windows domain. This option
only displays on Linux hosts.
• UNIX first: This is used for Linux
repositories where Windows domain
authentication is in use. This option only
displays on Linux hosts.
The user is authenticated first by the
default Linux mechanism. If the
authentication fails, the user is
authenticated against a Windows
domain.
• Domain first: This is used for Linux
repositories where Windows domain
authentication is in use. This option only
displays on Linux hosts.
The user is authenticated first against a
Windows domain; if that fails, the user is
authenticated by the default Linux
mechanism.
• LDAP: The user is authenticated through
an LDAP directory server.
Description A description of the user account.
E-Mail Address The email address of the user. This is the
email address to which notifications are sent
for workflow tasks and registered events.
Windows Domain (Windows only) The domain name
associated with the Windows account of the
new user.
Home Repository The repository where the user receives
notifications and tasks.
User is global If the user is created in the governing
repository of a federation, select this option
to propagate the user account to all members
of the federation.
Field Value
Default Folder The default storage place for any object the
user creates. Click Select to assign a folder.
Default Group The group that is associated with the default
permission set of the user. Click Select to
specify a default group.
Field Value
Client Capability Indicates the expertise level of the user.
This option is for informative purposes only
and is not associated with any privileges.
Documentum Server does not recognize or
enforce these settings.
Alias Set The default alias set for the user. Click Select
to specify an alias set.
If user exists, then overwrite user information Select this option if a user already exists in
the repository and you want to replace the
existing information with the imported
information.
If user exists, then ignore information Select this option if a user already exists in
the repository and you want to retain the
existing information.
Each imported user starts with the header object_type:dm_user. Follow the header
with a list of attribute_name:attribute_value pairs. The attributes user_name and
user_os_name are required. In addition, the following default values are assigned
when the LDIF file is imported:
Argument Default
user_login_name username
privileges 0 (None)
folder /username
group docu
client_capability 1
object_type:dm_user
user_name:Pat Smith
user_group:accounting
acl_domain:smith
acl_name:Global User Default ACL
object_type:dm_user
user_name:John Brown
If the LDIF file contains umlauts, accent marks, or other extended characters, store
the file as a UTF-8 file; otherwise, users whose names contain the extended
characters are not imported.
The attributes you can set through the LDIF file are:
user_name
user_os_name
user_os_domain
user_login_name
user_login_domain
user_password
user_address
user_db_name
user_group_name
user_privileges (set to integer value)
default_folder
description
acl_domain
acl_name
user_source (set to integer value)
home_docbase
user_state (set to integer value)
client_capability (set to integer value)
globally_managed (set to T or F)
alias_set_id (set to an object ID)
workflow_disabled (set to T or F)
user_xprivileges (set to integer value)
failed_auth_attempt (set to integer value)
You can specify as many of the attributes as you wish, but the attribute_names must
match the actual attributes of the type.
The attributes may be included in any order after the first line
(object_type:dm_user). The Boolean attributes are specified using T (for true) or F
(for false). Use of true, false, 1, or 0 is deprecated.
Any ACLs that you identify by acl_domain and acl_name must exist before you run
the file to import the users. Additionally, the ACLs must represent system ACLs.
They cannot represent private ACLs.
Any groups that you identify by user_group_name must exist before you run the file
to import the users.
Documentum Server creates the default folder for each user if it does not already
exist.
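Pulling these rules together, a fuller import entry than the sample above might look
like the following sketch. All names and values here are hypothetical; the group and
ACL referenced must already exist in the repository:

object_type:dm_user
user_name:Alex Chan
user_os_name:achan
user_login_name:achan
user_group_name:engineering
acl_domain:dm_dbo
acl_name:Engineering Default ACL
user_privileges:0
client_capability:2
workflow_disabled:F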
When you delete a user, the server does not remove the user's name from objects in
the repository such as groups and ACLs. Consequently, when you delete a user, you
must also remove or change all references to that user in objects in the repository.
You can delete a user and then create a user with the same name. If you add a new
user with the same name as a deleted user and have not removed references to the
deleted user, the new user inherits the group membership and object permissions
belonging to the deleted user.
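To find leftover references before reusing a name, you can query the repository with
DQL along these lines. The user name pat_smith is hypothetical; users_names and
r_accessor_name are the repeating attributes that record group membership and
ACL entries:

SELECT group_name FROM dm_group WHERE ANY users_names = 'pat_smith'
SELECT object_name, owner_name FROM dm_acl WHERE ANY r_accessor_name = 'pat_smith'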
Notes
• All users have permission to update or remove only their own profile
pictures.
The format (for example, PNG) of the user profile picture must comply with the
Documentum Server-supported image formats where the MIME type of the
dm_format object must be image/*. The size of the user profile picture must not
exceed 5 MB.
In a multi-server distributed setup, you must change the location to a UNC path or
NFS/CIFS share so that all Documentum Server instances can access the user profile
picture. In addition, Documentum Server provides audit events that can be
configured for tracking all operations related to the user profile picture.
The DFC events for managing the user profile picture are dm_user_updatepicture,
dm_user_removepicture, and dm_user_retrievepicture. “DFC events” on page 628
contains more information.
11.3 Groups
A group represents multiple repository users, and can contain groups, users, or
roles. By default, a group is owned by the user who creates the group. Groups can
be public or private. By default, groups created by a user with Create Group
privileges are private, while groups created by a user with system administrator or
superuser privileges are public.
A group can be a dynamic group. A dynamic group is a group, of any group class,
whose list of members is considered a list of potential members.
To create or modify groups, you must have privileges as shown in the following
table:
The name assigned to a group must consist of characters that are compatible with
the Documentum Server OS code page.
If you create a role as a domain, it is listed on the groups list, not the roles list.
To jump to a particular group, type the first few letters of its object name in the
Starts with box and click Search. To view a list of all groups beginning with a
particular letter, click that letter. To view a different number of groups than the
number currently displayed, select a different number in the Show Items list.
You can use dynamic groups to model role-based security. For example, suppose
you define a dynamic group called EngrMgrs. Its default membership behavior is to
assume that users are not members of the group. The group is granted the privileges
to change ownership and change permissions. When a user in the group accesses the
repository from a secure application, the application can issue the session call to add
the user to the group. If the user accesses the repository from outside your firewall
or from an unapproved application, no session call is issued and Documentum
Server does not treat the user as a member of the group. The user cannot exercise the
change ownership or change permissions permits through the group.
The first set of privileged groups are used in applications or for administration
needs. With two exceptions, these privileged groups have no default members when
they are created. You must populate the groups. The following table describes these
groups:
Group Description
dm_browse_all Members of this group can browse any
cabinets and folders in the repository,
except the rooms that were created using
Documentum Collaborative Services (DCS).
Group Description
dm_retention_users Members of this group can add retainers
(retention policies) to SysObjects.
The second set of privileged groups are privileged roles that are used internally by
DFC. You cannot add or remove members from these groups. The groups are:
• dm_assume_user
• dm_datefield_override
• dm_escalated_delete
• dm_escalated_full_control
• dm_escalated_owner_control
• dm_escalated_relate
• dm_escalated_version
• dm_escalated_write
• dm_internal_attrib_override
• dm_user_identity_override
Field Value
Name The name of the repository group.
Group Native Room The native room of the group. This field appears
only if the rooms feature of Collaborative
Services is enabled.
E-Mail Address The email address for the new group.
Field Value
Administrator Specifies a user or group, in addition to a
superuser or the group owner, who can
modify the group. If this is null, only a
superuser and the group owner can modify
the group.
Field Value
Protected Indicates if the group is protected against
adding or deleting members. Use of a
protected dynamic group is limited to
applications running with a DFC installation
that has been configured as privileged
through the Documentum Administrator
client rights administration.
11.4 Roles
A role is a type of group that contains a set of users or other groups that are assigned
a particular role within a client application domain.
If you create a role as a domain, it is listed in the groups list, not the roles list.
Field Value
Name The name of the repository role.
Group Native Room The native room for the role. The field
appears only if the rooms feature of
Collaborative Services is enabled.
E-Mail Address The email address for the new role. This is
typically the email address of the owner of
the role.
Field Value
Role Is Global If the role is being created in the governing
repository of a federation, select to propagate
the attributes of the role to all members of the
federation.
Description A description of the role.
Private Defines whether the role is private. If not
selected, the role is created as a public role.
By default, module roles are dynamic. A dynamic module role is a role whose list of
members is considered a list of potential members. User membership is controlled
on a session-by-session basis by the application at runtime. The members of a
dynamic module role comprise the set of users who are allowed to use the module
role, but a session started by one of those users behaves as though it is not part of
the module role until membership is specifically requested by the application.
Administrators should not modify module roles unless they are configuring a client
that requires privileged escalations.
Field Value
Name The name of the repository module role.
Group Native Room The native room for the module role. The
field appears only if the rooms feature of
Collaborative Services is enabled.
E-Mail Address The email address for the module role.
11.6 Sessions
A repository session is opened when an end user or application establishes a
connection to a server. Each repository session has a unique ID.
During any single API session, an external application can have multiple repository
sessions, each with a different repository or server or both.
A repository session is terminated when the end user explicitly disconnects from the
repository or the application terminates.
The Sessions page lists sessions in the current repository. For each session, the name,
Documentum Session ID, Database Session ID, Client Host, Start Time, time Last
Used, and State are displayed. To view all sessions or user sessions, make a selection
from the drop-down list. To view a different number of sessions, select a new
number from the Show Items drop-down list. To view the next page of sessions,
click the > button. To view the previous page of sessions, click the < button. To jump
to the first page of sessions, click the << button. To jump to the last page, click the
>> button.
Field Description
Root Process Start Date The last start date for the server to which the
session is connected
Root Process ID The process ID of the server on its host
User Name The session user
Client Host The host from which the session is connected
Session ID The ID of the current repository session
Database Session ID The ID of the current database session
Session Process ID The operating system ID of the current
session process
Start Time The time the session was opened
Last Used The time of the last activity for the session
Session Status The status of the current session
Client Library Version The DMCL version in use
User Authentication The authentication type
Shutdown Flag An internal flag
Client Locale The preferred locale for repository sessions
started during an API session
Basic permissions grant the ability to access and manipulate content of an object. The
seven basic permission levels are hierarchical and each higher access level includes
the capabilities of the preceding access levels. For example, a user with Relate
permission also has Read and Browse.
Extended permissions are optional, grant the ability to perform specific actions
against an object, and are assigned in addition to basic permissions. The six levels of
extended permissions are not hierarchical, so each must be assigned explicitly. Only
system administrators and superusers can grant or modify extended permissions.
When a user tries to access an object, Documentum Server first determines if the
user has the necessary level of basic permissions. If not, extended permissions are
ignored.
Permission sets (also known as access control lists, or ACLs) are configurations of
basic and extended permissions assigned to objects in the repository that lists users
and user groups and the actions they can perform. Each repository object has a
permission set that defines the object-level permissions applied to it, including who
can access the object. Depending on the permissions, users can create new objects;
perform file-management actions such as importing, copying, or linking files; and
start processes, such as sending files to workflows.
Each user is assigned a default permission set. When a user creates an object, the
repository assigns the user's default permission set to that object. For example, if
the default permission set gives all members of a department Write access and all
other users Read access, then those are the access levels assigned to the object.
Users can change the access levels of an object by changing its permission set. To do
this, the user must be the owner of the object (typically the owner is the user who
created the object) or must have superuser privileges in the object's repository.
The ability to modify permission sets depends on the user privileges in the
repository:
• Users with superuser privileges can modify any permission set in the repository.
They can designate any user as the owner of a permission set, and change the
owner of a permission set. This permission is usually assigned to the repository
administrator.
• Users with system administrator privileges can modify any permission set
owned by them or by the repository owner. They can designate themselves or the
repository owner as the owner of a permission set they created and can change
whether they or the repository owner owns the permission set. This permission
is usually assigned to the repository administrator.
• Users with privileges lower than system administrator or superuser own only
the permission sets they create. They can modify any permission set they own,
but cannot change the owner of the permission set.
If you designate the repository owner as the owner of a permission set, that
permission set is a System (or Public) permission set. Only a superuser, system
administrator, or the repository owner can edit the permission set. If a different user
is the owner of the permission set, it is a Regular (or Private) permission set. It can
be edited by the owner, a superuser, system administrator, or the repository owner.
A user with Write or Delete permission can change which permission set is assigned
to an object.
If you use Documentum Web Publisher and if the user does not assign the default
permission set, the Documentum Server assigns a default permission set according
to the setting in the default_acl property in the server configuration object.
When folder security is enabled, the user must have the appropriate permission on
the:
• Target folder to create, import, copy, or link an object into the folder.
• Source folder to move or delete an object from a folder.
Folder security only pertains to changing the contents in a folder. For example, a
user with Browse permission on a folder can still check out and check in objects
within the folder.
If you use Documentum Web Publisher, and if folder security is used in a repository,
any content files in the WIP state must have the same permission as the folder. To
use the same folder permission, the administrator must make sure that the lifecycle
in WIP state does not apply any set ACL action. For example:
WIP - folder acl
Staging - WP "Default Staging ACL"
Approved - WP "Default Approved ACL"
Notes
1. Checks for a basic access permission or extended permission entry that gives the
user the requested access level (Browse, Read, Write, and so forth)
Note: Users are always granted Read access to documents they own,
regardless of whether there is an explicit entry granting Read access.
2. Checks for no access restriction or extended restriction entries that deny the user
access at the requested level.
A restricting entry, if present, can restrict the user specifically or can restrict
access for a group to which the user belongs.
3. If there are required group entries, the server checks that the user is a member of
each specified group.
4. If there are required group set entries, the server checks that the user is a
member of at least one of the groups specified in the set.
If the user has the required permission, with no access restrictions, and is a
member of any required groups or group sets, the user is granted access at the
requested level.
When a user is the object owner, Documentum Server evaluates the entries in the
object's permission set in the following manner:
1. Checks if the owner belongs to any required groups or a required group set.
If the owner does not belong to the required groups or group set, then the owner
is allowed only Read permission as their default base permission, but is not
granted any extended permissions.
2. Determines what base and extended permissions are granted to the owner
through entries for dm_owner, the owner specifically (by name), or through
group membership.
3. Applies any restricting entries for dm_owner, the owner specifically (by name),
or any groups to which the owner belongs.
4. The result constitutes the base and extended permissions of the owner.
• If there are no restrictions on the base permissions of the owner and the
dm_owner entry does not specify a lower level, the owner has Delete
permission by default.
• If there are restrictions on the base permission of the owner, the owner has
the permission level allowed by the restrictions. However, an owner will
always have at least Browse permission; they cannot be restricted to None
permission.
• If there are no restrictions on the extended permissions of the owner, the
owner has, at minimum, all extended permissions except delete_object by
default. The owner may also have delete_object if that permission was
granted to dm_owner, the owner specifically (by name), or through a group
to which the owner belongs.
• If there are restrictions on the extended permissions of the owner, then the
owner's extended permissions are those remaining after applying the
restrictions.
When Documentum Server evaluates a superuser's access to an object, the server
does not apply AccessRestriction, ExtendedRestriction, RequiredGroup, or
RequiredGroupSet entries to the superuser. The superuser's base permission is
determined by evaluating the AccessPermit entries for the user, for dm_owner, and
for any groups to which the user belongs. The superuser is granted the least
restrictive permission among those entries. If that permission is less than Read, it is
ignored and the superuser has Read permission by default.
The extended permissions of a superuser are all extended permits other than
delete_object, plus any granted to dm_owner, the superuser by name, or to any
groups to which the superuser belongs. This means that the superuser's extended
permissions may include delete_object if that permit is explicitly granted to
dm_owner, the superuser by name, or to groups to which the superuser belongs.
Column Description
Name The object name of the permission set.
Owner The user or group that owns the permission
set.
Class Specifies how the permission set is used:
• Regular: The permission set can only be
used by the owner.
• Public: The permission set can be used by
any user.
Description A description of the permission set.
Whether the server uses the user-specific domain or the default domain for
authentication depends on whether the server is running in domain-required mode.
By default, the Documentum Server installation procedure installs a repository in
no-domain required mode. If a Windows configuration has only users from one
domain accessing the repository, the no-domain required mode is sufficient. If the
users accessing the repository are from varied domains, using domain-required
mode provides better security for the repository.
In domain-required mode, the combination of the login name and domain or LDAP
server name must be unique in the repository. It is possible to have multiple users
with the same user login name if each user is in a different domain.
Trusted logins do not require a domain name even if the repository is running in
domain-required mode.
Privileged DFC provides a client rights domain that is implemented using a client
rights domain object (dm_client_rights_domain). The client rights domain contains
the Documentum Servers that share the same set of client rights objects
(dm_client_rights). The client rights domain is configured in a global repository that
acts as a governing repository. Multiple repositories can be grouped together under
the global repository to share privileged DFC information. The global repository
propagates all changes in the client rights objects to the repositories that are
members of the domain.
Privileged DFC is the term used to refer to DFC instances that can invoke escalated
privileges or permissions for a particular operation. For example, privileged DFC
can request to use a privileged role for an application to perform an operation that
requires higher permissions or a privilege.
A client rights domain is a group of repositories that share the same client rights.
The group of repositories in a client rights domain is typically governed by a global
repository (global registry). The following rules and restrictions apply to a client
rights domain and repository members of that domain:
Each installed DFC has an identity, with a unique identifier extracted from the PKI
credentials. The first time an installed DFC is initialized, it creates its PKI credentials
and publishes its identity to the global registry known to the DFC. In response, a
client registration object and a public key certificate object are created in the global
registry. The client registration object records the identity of the DFC instance. The
public key certificate object records the certificate used to verify that identity.
Column Description
Client Name The name of the DFC client.
Client ID A unique identifier for the DFC client.
Host Name The name of the host on which the DFC
client is installed.
Approved Indicates if the given DFC client is approved
to perform privilege escalations.
Manage Clients The Manage Client button displays the
Manage Client page, which lists all DFC
clients that are registered in the global
registry.
Notes
• If the DFC client does not have the global registry information configured
(that is, valid dfc.globalregistry.repository, dfc.globalregistry.username, and
dfc.globalregistry.password values), it is not displayed in the list of DFC
clients.
Column Description
Client Name The name of the DFC client.
Client ID A unique identifier for the DFC client.
Host Name The name of the host on which the DFC
client is installed.
Creation Date The creation date of the DFC client.
Field Value
Client Name The name of the DFC client.
Client ID The unique identifier for the DFC client.
Host Name The name of the host on which the DFC
client is installed.
Client Privilege Indicates whether the DFC client is approved
to perform privilege escalations.
Trusted Login Specifies whether the client is allowed to
create sessions for users without user
credentials.
Note: You cannot delete a certificate that is also used by another DFC client.
Administrator access set definitions reside in the global registry. The access sets do
not conflict with Documentum Server privileges. Object level and user level
permissions and permission sets take precedence over administrator access sets. In
general, administrator access sets control node access as follows:
• Users who are not assigned an administrator access set and do not have
superuser privileges cannot access administration nodes.
• Users who are assigned an administrator access set, but do not have superuser
privileges, can access only the nodes that are enabled in their administrator
access set.
• Users with superuser privileges and at least coordinator client capabilities are not
affected by administrator access sets. These users always have access to the entire
administration node.
• The Groups node is always enabled for users with Create Group privileges.
• The Types node is always enabled for users with Create Type privileges.
The list of available roles is retrieved from the repository to which the administrator
is connected. To ensure that administrator access sets function correctly across an
application, the roles associated with the administrator access sets must exist in all
repositories. If the same role name exists in both the global repository and a non-
global repository, the user of the role would see the nodes as per the administrator
access specified in the global repository. Even if the user is able to see the nodes, the
user can perform operations only with sufficient privileges.
Note: The following Administration nodes are currently not available for the
administrator access set functionality:
• Privileged Clients
Field Value
Name Name of the administrator access set. The
administrator access set name must be
unique. After creating and saving an
administrator access set, the name cannot be
modified.
Description Description of the administrator access set.
Nodes Select one or more node options to designate
the nodes that users with this administrator
access set can access. At least one node must
be selected for an administrator access set.
The available node options are:
• Basic Configuration
• LDAP Server Configuration
• Java Method Servers
• User Management
• Audit Management
• Jobs and Methods
• Content Objects
• Storage Management
• Content Delivery
• Index Management
• Content Intelligence
• Content Transformation Services
• Resource Management
Field Value
Assigned Role Indicates the role assigned to the
administrator access set. If the role does not
exist in the connected repository, the role is
displayed in a red font.
When a Documentum Server starts, it loads the plug-ins found in its repository-
specific directory first and then those located in the base directory. If two or more
plug-ins loaded by the server have the same identifier, only the first one loaded is
recognized. The remaining plug-ins with the same name are not loaded.
When you want to use a plug-in to authenticate a particular user regularly, set the
User Source property of that user account to the plug-in identifier.
Plug-in identifiers are defined as the return value of the dm_init method in the
interface of the plug-in module. A plug-in identifier must conform to the following
rules:
• It cannot use the prefix dm_. (This prefix is reserved for Documentum.)
The following are examples of valid plug-in identifiers:
• myauthmodule
• authmodule1
• auth4modul
The RSA plug-in for Documentum allows Documentum Server to authenticate users
based on RSA ClearTrust Single sign-on tokens instead of passwords.
Platform File
Windows dm_netegrity_auth.dll
Linux dm_netegrity_auth.so
The directory also includes the CA SiteMinder header files and libraries, to allow
you to rebuild the plug-in.
To use the plug-in after you install it, include the plug-in identifier in connection
requests or set the user_source property in users to the plug-in identifier.
“Identifying a plug-in” on page 248, contains more information.
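In a connection request, the plug-in is selected by prefixing the password value with
the plug-in identifier, in the general shape sketched below. The exact token expected
after the identifier depends on the plug-in; this form follows the dm_cas example
shown later in this section:

DM_PLUGIN=<plugin identifier>/<password or token>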
Note: The dm_netegrity plug-in user authentication fails when the SiteMinder
WebAgent uses NTLM authentication instead of BasicPassword authentication
to protect a web resource, such as Webtop. The dm_netegrity plug-in is
currently unable to match the user distinguished name (DN).
This section describes the steps in the authentication flow using the CAS plug-in.
1. REST negotiates a proxy ticket from CAS, generating a CAS proxy ticket for
the target service name.
2. REST makes a connection to the repository using the credentials.
• Username: Name of the user for whom the proxy ticket is generated.
• Password: Proxy ticket as the password. The format of the password is
DM_PLUGIN=dm_cas/<proxy ticket>.
3. The new CAS plug-in of Documentum Server calls CAS to validate the proxy
ticket. The CAS plug-in sends an HTTP request to the CAS URL, http://<cas
server url>/proxyValidate, with two request parameters:
• service: This should match the Target Service parameter sent for generating
the proxy ticket.
• ticket: This contains the proxy ticket.
4. The CAS server sends a response if the proxy ticket validation is successful.
<cas:serviceResponse xmlns:cas='http://www.yale.edu/tp/cas'>
<cas:authenticationSuccess>
<cas:user>username</cas:user>
<cas:proxies>
<cas:proxy>https://10.37.10.28:8443/pgtCallback</cas:proxy>
</cas:proxies>
</cas:authenticationSuccess>
</cas:serviceResponse>
5. If the user name matches the CAS user name for which the proxy ticket was
generated, Documentum Server returns to REST a session in the context of the
requested user.
Note: If the client requires the login ticket to be valid across all repositories,
then a global login ticket must be used. The requirements for generating a
global login ticket are:
• All repositories must be trusted, because a global login ticket is valid only if
the receiving repository and the issuing repository trust each other.
• An identical login ticket key in the receiving repository and the global
ticket-generating repository.
As part of custom configurations for the CAS plug-in, the CAS plug-in is configured
using the dm_cas_auth.ini file placed at <%DOCUMENTUM%>\dba\auth
directory.
The “Parameters for configuring CAS” on page 252 table lists the various parameters
required for configuring CAS.
Name Description
server_host Host name that resolves to the CAS server.
server_port HTTP port number for connection with CAS
server.
url_path URL path used to send the HTTP request to
the CAS server to validate the proxy ticket.
service_param Service name for which the proxy ticket is
generated.
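Assembled from the parameters above, a dm_cas_auth.ini might look like the
following sketch. The host, port, path, and service values are hypothetical and must
match your CAS deployment:

server_host=cas.example.com
server_port=8443
url_path=/cas/proxyValidate
service_param=ContentServer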
As part of custom configurations for the CAS plug-in, the CAS plug-in also supports
authentication for LDAP users; this support requires customization.
The CAS server is configured to synchronize with the LDAP server. The new
dmCSLdapUserDN attribute is used in the authentication response to expose the
user's LDAP distinguished name (DN).
3. After the Documentum Server is registered into CAS for proxy, set the
customized attribute dmCSLdapUserDN in the list of allowedAttributes.
For example, here is a sample in-memory service registry setting:
<bean id="serviceRegistryDao"
class="org.jasig.cas.services.InMemoryServiceRegistryDaoImpl">
<property name="registeredServices">
<list>
<!--the following registered service is for Documentum Server, as
an example -->
<bean class="org.jasig.cas.services.RegisteredServiceImpl">
<property name="id" value="1" />
<property name="name" value="Documentum Server proxy service" />
<property name="description" value="Allows CAS Proxy for
Documentum Server repositories" />
<property name="serviceId" value="ContentServer" />
<property name="evaluationOrder" value="0" />
<property name="allowedAttributes">
<list><value>dmCSLdapUserDN</value></list>
</property>
</bean>
<!-- other settings, for HTTP/HTTPS, IMAP and others -->
<bean ..../>
</list>
</property>
</bean>
Authentication plug-ins that require root privileges to authenticate users are not
supported. If you want to write a custom authentication mechanism that requires
root privileges, use a custom external password checking program.
Caution
This section outlines the basic procedure for creating and installing a
custom authentication plug-in. Documentum provides standard support
for plug-ins that are created and provided with the Documentum Server
software, as part of the product release. For assistance in creating,
implementing, or debugging custom rules, contact OpenText Global
Technical Services for service and support options to meet your
customization needs.
The inPropBag and outPropBag parameters are abstract objects, called property
bags, used to pass input and output parameters. The methods have the following
functionality:
• dm_init
The dm_init method is called by Documentum Server when it starts up. The
method must return the plug-in identifier for the module. The plug-in identifier
should be unique among the modules loaded by a server. If it is not unique,
Documentum Server uses the first one loaded and logs a warning in the server
log file.
• dm_authenticate_user
The dm_authenticate_user method performs the actual user authentication.
• dm_change_password
The dm_change_password method changes the password of a user.
• dm_plugin_version
The dm_plugin_version method identifies the version of the interface. Version
1.0 is the only supported version.
• dm_deinit
The dm_deinit method is called by Documentum Server when the server shuts
down. It frees up resources allocated by the module.
The dmauthplug.h header file contains detailed comments about each of the interface
methods. The header file resides in the %DM_HOME%\install\external_apps
\authplugins\include ($DM_HOME/install/external_apps/authplugins/include)
directory. All authentication plug-ins must include this header file.
Additionally, all plug-ins must link to the dmauthplug.lib file in the %DM_HOME%
\install\external_apps\authplugins\include ($DM_HOME/install/external_apps/
authplugins/include) directory.
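The entry points above can be pictured with a minimal C skeleton. This is a sketch only: the function names follow the interface described above, but the signatures, return conventions, and hard-coded logic shown here are simplified placeholders. The authoritative prototypes, including the inPropBag and outPropBag property-bag parameters, are defined in dmauthplug.h, which every real plug-in must include:

```c
/*
 * Hypothetical skeleton of a custom authentication plug-in.
 * Signatures are simplified placeholders; consult dmauthplug.h
 * for the real prototypes and property-bag parameters.
 */
#include <string.h>

/* dm_init: called at server startup; must return a plug-in identifier
   that is unique among the modules loaded by the server. */
const char *dm_init(void)
{
    return "sample_auth_plugin";   /* hypothetical identifier */
}

/* dm_plugin_version: version 1.0 is the only supported interface version. */
const char *dm_plugin_version(void)
{
    return "1.0";
}

/* dm_authenticate_user: performs the actual check. The hard-coded
   comparison below is placeholder logic; a real plug-in would consult
   an external authority. Returns 0 on success in this sketch. */
int dm_authenticate_user(const char *user, const char *password)
{
    return (user != NULL && password != NULL &&
            strcmp(password, "secret") == 0) ? 0 : 1;
}

/* dm_change_password: placeholder; a real plug-in would update the
   password in the external authority after verifying the old one. */
int dm_change_password(const char *user, const char *old_pw,
                       const char *new_pw)
{
    (void)new_pw;                  /* unused in this sketch */
    return dm_authenticate_user(user, old_pw);
}

/* dm_deinit: called at server shutdown; frees resources allocated
   by the module (nothing to free in this sketch). */
void dm_deinit(void)
{
}
```

A real plug-in is built as a shared library linked against dmauthplug.lib, so it has no main function; the server loads it and calls dm_init at startup.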
12.12.1.6.2 Internationalization
An authentication plug-in can use a code page that differs from the Documentum
Server code page. To enable that, the code page must be passed in the output
property bag of the dm_init method. If the code page is passed, Documentum Server
translates all parameters in the input property bag from UTF-8 to the specified code
page before calling the dm_authenticate_user or dm_change_password methods.
The server also translates back any error messages returned by the plug-in. A list of
supported code pages is included in the header file, dmauthplug.h.
• Simple signoffs
Simple signoffs are the least rigorous way to enforce an electronic signature
requirement. No additional Documentum Server license is needed. “Customizing
simple signoffs” on page 261 contains the instructions on using and customizing
simple signoffs.
Note: We recommend that you back up any existing custom templates before
upgrading to a new template.
By default, each signature page can have six signatures. Additional signatures are
configured using the esign.signatures.per.page property in the esign.properties file.
The esign.properties file is packaged in the configs.jar file in the
%DM_JMS_HOME%\shared\lib directory.
Property Description
esign.server.language=en Indicates the locale of the Documentum Server.
esign.server.country=US Indicates the country code.
esign.date.format=MM/dd/yyyy HH:mm:ss Indicates the date format on the signature page. If the date format is not provided in the esign.properties file, the date format of the Documentum Server is used.
esign.signatures.per.page Indicates the number of signatures per signature page.
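A minimal esign.properties fragment using the properties from the table above might look as follows; the values mirror the defaults shown in the table, and six signatures per page is the documented default:

```properties
# Values mirror the defaults described in the table above.
esign.server.language=en
esign.server.country=US
esign.date.format=MM/dd/yyyy HH:mm:ss
# Six signatures per page is the default; raise this to allow more.
esign.signatures.per.page=6
```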
12.12.2.2.2 Creating custom signature creation methods and associated signature page
templates
If you do not want to use the default functionality, you must write a signature
creation method. Using a signature page template is optional.
This section outlines the basic procedure for creating and installing a user-defined
signature creation method. Documentum provides standard support for the default
signature creation method and signature page template installed with the
Documentum Server software. For assistance in creating, implementing, or
debugging a user-defined signature creation method or signature page template,
contact OpenText Global Technical Services for service and support options to meet
your customization needs.
• The method should check the number of signatures currently defined for the
object to ensure that adding another signature does not exceed the maximum
number of signatures for the document.
• The method must return 1 if it completes successfully or a number greater than 1
if it fails. The method cannot return 0.
• If the trace parameter is passed to the method as T (TRUE), the method should
write trace information to standard output.
To use the custom method, applications must specify the name of the method object
that represents the custom method in the addESignature arguments. The following
table describes the parameters passed by the addESignature method:
Parameter Description
docbase Name of the repository.
user Name of the user who is signing.
doc_id Object ID of the document.
file_path The name and location of the content file.
signature_properties A set of property and value pairs that
contain data about the current and previous
signatures. The information can be used to
fill in a signature page. The set includes:
• sig.requester_0...<n>
• sig.no_0...<n>
• sig.user_id_0...<n>
• sig.user_name_0...<n>
• sig.reason_0...<n>
• sig.date_0...<n>
• sig.utc_date_0...<n>
If you decide to create a new signature template page, you can define the format,
content, and appearance of the page. You can store the template as primary content
of an object or as a rendition. For example, you can create an XML source file for
your template and generate an HTML rendition that is used by the custom signature
creation method. If you store the template as a rendition, set the template page
modifier to dm_sig_template. Setting the page modifier to dm_sig_template ensures
that the Rendition Manager administration tool does not remove the template
rendition.
Custom field names in custom signature templates must be added with an ECustF_
prefix, using the following syntax:
ECustF_Attribute_name1 = 'attr1_value'
You can trace the addESignature method and the default signature creation method
by setting the trace level for the DM_SYSOBJECT facility to 5 (or higher).
If you are using a custom signature creation method, trace messages that the method
writes to standard output are recorded in the server log file when tracing is enabled.
Note: When tracing is enabled, the addESignature method passes the trace
parameter set to T (TRUE) to the signature creation method.
To avoid performance issues when adding signatures to documents larger than
30 MB, it is recommended to increase the JVM memory according to availability.
For example, set the value as -Xms1024m -Xmx1024m in the JMS
startup file.
For example:
apply,c,NULL,REGENERATE_AUDIT_SIGNATURE,FROM_DATE,S,'03/20/2015',
TO_DATE,S,'04/03/2015',FORMAT,S,'mm/dd/yyyy',TRACE,B,T
apply,c,NULL,REGENERATE_AUDIT_SIGNATURE,FROM_DATE,S,'01/04/2014',
TO_DATE,S,'02/10/2014',FORMAT,S,'mm/dd/yyyy'
• Creating an alternate validation program that uses the user authentication name
and password to validate the user.
• Passing data other than a password through the password argument of signoff
and creating a custom validation method to authenticate the user with that data.
For example, you can pass some biometric data and then validate that using a
custom validation program.
• Using an argument in a registration method to force Documentum Server to sign
the dm_signoff events or to add additional information to the entries.
Use the following procedure to create and implement a customized simple signoff:
Users can register the dm_signoff event so that when the signoff method is executed,
the server sends mail notifications to users. You do not need to register a separate
audit event, because the server always generates an audit trail for signoff executions.
To find everyone who signed off a specific object whose ID is xxx, use the following
DQL statement:
SELECT "user_name" FROM "dm_audittrail" WHERE
"audited_obj_id" = 'xxx' AND
"event_name" = 'dm_signoff'
12.12.3.1 AEK
Starting with the 7.2 release, the creation of the AEK is performed during the
creation of the repository. Prior to 7.2, there was only one AEK in each Documentum
Server software installation. It was created and installed during the Documentum
Server software installation procedure or when an existing repository was upgraded.
The AEK is used to encrypt:
On Windows:
%DOCUMENTUM%\dba\secure\aek.key
On Linux:
$DOCUMENTUM/dba/secure/aek.key
The file is read-only. The name of the file depends on the keyname parameter.
Documentum Server and all server processes and jobs that require access to the AEK
use the following algorithm to find it:
Starting with the 7.2 release, the creation of the AEK can be performed during the
creation of the repository. During repository creation or upgrade, an option is
provided to create or upgrade the key, or to use an existing key.
You can choose to maintain your existing key. If you need to use the AEK key before
creating the repository, create a key using the dm_crypto_create tool. Starting with
the 7.2 release, changing of the AEK is supported.
If there are multiple Documentum products installed on one host, the products can
use different AEKs or the same AEK.
On the same host, all keys have to use the same passphrase. If you want that
passphrase to be a custom passphrase, perform the following procedure after you
install the Documentum Server software:
If your repository is a distributed repository, all servers at all sites must use the
same AEK. The installer handles copying the AEK key.
In multiple-repository distributed sites, you should use the same AEK key.
It is strongly recommended that you back up the AEK file separately from the
repository and content files. Backing up the AEK and the data it encrypts in different
locations helps prevent a dictionary-based attack on the AEK.
Documentum Server uses the repository encryption key to create the signature on
audit trail entries. The server also uses the repository encryption key to encrypt file
store keys.
A repository created to use local key management stores the encrypted key in the
repository configuration object.
• dm_crypto_boot
Use this utility if you are protecting the AEK with a custom passphrase. Use it to:
It is not necessary to use this utility if you are protecting the AEK with the
default, Documentum passphrase. “Using dm_crypto_boot” on page 266
contains more details and instructions on using the utility.
• dm_crypto_create
Note: When you run the utilities from a command prompt or another Windows
utility, make sure that the application is launched as administrator using Run
as Administrator. Otherwise, the Windows operating system restricts access
to the shared memory and other system resources.
You can also use this utility to clean up the shared memory region used to store the
AEK.
To run this utility, you must be a member of the dmadmin group and you must have
access to the AEK file.
If the user is prompted for a passphrase, the passphrase is not displayed on screen
as it is typed.
• -keyname <<keyname>>: This is the name of the key. By default, the name is
CSaek.
You must include either the -location or -all argument. Use -location if only one
product is accessing the AEK. Using -location installs an obfuscated AEK in shared
memory. The -location argument identifies the location of the AEK key. Make sure
that you provide the complete path of the file along with the AEK file name when
using the –location argument. If you do not include the argument, the utility looks
for the location defined in DM_CRYPTO_FILE. If that environment variable is not
defined, the utility assumes the location %DOCUMENTUM%\dba\secure\aek.key
(Windows) or $DOCUMENTUM/dba/secure/aek.key (Linux).
Use the -all argument if there are multiple products on the host machine that use the
same passphrase for their AEKs, or to use the AES_256_CBC algorithm with a
custom passphrase. If you include -all, the utility obfuscates the passphrase and
writes it into shared memory instead of the AEK. Each product can then obtain the
passphrase and use it to decrypt its own AEK.
To clean up the shared memory used to store the obfuscated AEK or passphrase,
execute the utility with the -remove argument. You must include the -passphrase
argument if you use -remove.
The -help argument returns help information for the utility. If you include -help, you
cannot include any other arguments.
Note: This utility does not work for an OpenText Key Mediation Service (OT
KMS)-based AEK key. When you use this utility for an OT KMS-based AEK key,
it results in the following message: This dm_crypto_boot is meant for local
AEK only ** Operation failed ** [DM_CRYPTO_E_INVALID_ARGUMENT] error:
"Invalid arguments. Check Usage"
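To tie the arguments together, here is a hypothetical dm_crypto_boot session for a local, passphrase-protected AEK. The passphrase and the key path are placeholders:

```
:: Install the obfuscated AEK in shared memory (single product, custom passphrase)
dm_crypto_boot -location %DOCUMENTUM%\dba\secure\aek.key -passphrase <custom_passphrase>

:: Obfuscate the passphrase itself when multiple products share one passphrase
dm_crypto_boot -all -passphrase <custom_passphrase>

:: Clean up the shared memory region when it is no longer needed
dm_crypto_boot -remove -passphrase <custom_passphrase>
```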
• -keyname <<keyname>>: This is the name of the key. By default, the name is
CSaek.
• -location <<location>>: This is the location of the administration key file.
• -passphrase <<passphrase>>: This identifies the passphrase whose use you wish to
check. If you do not include the -passphrase argument, the utility assumes the
default passphrase but prompts you for confirmation. Consequently, if you are
checking the default passphrase and wish to avoid the confirmation request,
include the -noprompt argument. The -passphrase and -noprompt arguments are
mutually exclusive.
Note: If you are creating a custom passphrase for the AEK key, make sure
that you do not use:
Important
– If you specify the -remote flag, you must provide the values for <key_id>,
<kms_url>, <kms_api_key>.
– You must not specify both the <-passphrase> and <-remote> flags at the
same time. Only one is allowed in the syntax.
• -key_id <<key_id>>: This is the OT KMS master key ID generated during the OT
KMS configuration.
• -kms_url <<kms_url>>: This is the URL to connect to OT KMS Server. The format
is https://<IP address or host of OT KMS server>:port.
• -kms_api_key <<kms_api_key>>: This is the API key of OT KMS Server provided
during the OT KMS configuration.
• Starting with the 7.2 release, the utility is enhanced to provide the option for
specifying the algorithm used for generating encryption keys. The following
optional arguments are introduced:
– -check: If the AEK file exists and decryption succeeds, the utility returns 0. If
the AEK file exists but decryption fails, the utility returns 1. If the AEK file
already exists at the specified location, the utility returns 2. If the AEK file
does not exist at the specified location, the utility returns 3.
– -algorithm <<algorithm>>: This option specifies the algorithm to be used for
generating the AEK key. By default, AES_128_CBC is used.
Valid values are:
○ AES_128_CBC
○ AES_192_CBC
○ AES_256_CBC
– -help: This returns the help information for the utility. If you include -help, you
cannot include any other arguments.
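As a sketch, a dm_crypto_create invocation that creates a local AEK with the AES_256_CBC algorithm, followed by an existence check, might look like this; the key name, path, and passphrase are illustrative:

```
:: Create an AEK named CSaek using the AES_256_CBC algorithm
dm_crypto_create -keyname CSaek -location %DOCUMENTUM%\dba\secure\aek.key -algorithm AES_256_CBC -passphrase <custom_passphrase>

:: Verify the key: returns 0 if the file exists and decryption succeeds
dm_crypto_create -check -location %DOCUMENTUM%\dba\secure\aek.key -passphrase <custom_passphrase>
```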
File Description
dbpasswd.txt This file contains one line with the database password used by Documentum Server to connect to the RDBMS. (This is the password for the repository owner.)
<docbase_name>.cnt This file contains one line with the password used by an object replication job and the distributed operations to connect to the repository as a source or target repository.
federation.cnt Contains the information, including passwords, used by a governing repository server to connect to member repositories. The file is stored with the governing repository.
Use encryptPassword to encrypt any password that you want to pass in encrypted
form to one of the following methods:
• IDfSession.assume
• authenticate in IDfSessionManager, IDfSession, or IDfClient
• A method to obtain a new or shared session
• IDfPersistentObject.signoff
• IDfSession.changePassword
If you do not want to use an encrypted password for a particular operation, use a
text editor to edit the appropriate file listed in “Password files encrypted by default”
on page 269. Remove the encrypted password and replace it with the clear text
password.
To create or change an encrypted password in a file that you have created, use the
following syntax:
For example, executing the utility with the following command line replaces the
database password used by the Documentum Server in the engineering repository to
connect with the RDBMS:
dm_encrypt_password -keyname CSaek -docbase engineering -passphrase jdoe -rdbms
-encrypt 2003password
The AEK location is not identified, so the utility reads the location from the
DM_CRYPTO_FILE environment variable. The passphrase jdoe is used to decrypt
the AEK. The utility encrypts the password 2003password and replaces the current
RDBMS password in dbpasswd.txt with the newly encrypted password.
The AEK location is not identified, so the utility reads the location from the
DM_CRYPTO_FILE environment variable. The passphrase jdoe is used to decrypt
the AEK. The utility encrypts the password devpass and writes the encrypted value
to the file C:\engineering\specification.enc.
Note: This utility must be used only for the password-based AEK key.
The -location argument identifies the location of the AEK whose passphrase you
wish to change. If you do not include the argument, the utility looks for the location
defined in DM_CRYPTO_FILE. If that environment variable is not defined, the
utility assumes the location %DOCUMENTUM%\dba\secure\aek.key (Windows) or
$DOCUMENTUM/dba/secure/aek.key (Linux).
The -passphrase argument identifies the current passphrase associated with the
AEK. The -newpassphrase argument defines the new passphrase you wish to use to
encrypt the AEK. Both phrases interact in a similar manner with the -noprompt
argument. The behavior is described in “Interaction of
dm_crypto_change_passphrase arguments” on page 274.
You must include a value for at least one of the passphrases. You cannot be
prompted for both values or allow both values to default.
To illustrate the use of this utility, here are some examples. The first example
changes the passphrase for the Documentum Server host AEK from the
Documentum default to “custom_pass1”. The -passphrase argument is not included
and the -noprompt argument is included, so the utility assumes that the current
passphrase is the default passphrase.
dm_crypto_change_passphrase -location %DOCUMENTUM%\dba\secure\aek.key
-newpassphrase custom_pass1 -noprompt
This next example changes the passphrase from one custom passphrase to another:
dm_crypto_change_passphrase -location %DOCUMENTUM%\dba\secure\aek.key
-passphrase genuine -newpassphrase glorious
This final example changes the passphrase from a custom passphrase to the default:
dm_crypto_change_passphrase -location %DOCUMENTUM%\dba\secure\aek.key
-passphrase custom_pass1 -noprompt
Because the new passphrase is set to the Documentum default passphrase, it is not
necessary to include the -newpassphrase argument. The utility assumes that the new
passphrase is the default if the argument is not present and the -noprompt argument
is present.
The -help argument returns help information for the utility. If you include -help, you
cannot include any other arguments.
A repository created to use local key management stores the encrypted login ticket
key in the repository. There is no difference using the EXPORT_TICKET_KEY,
IMPORT_TICKET_KEY, and RESET_TICKET_KEY methods for either type of
repository.
You must restart Documentum Server after importing a login ticket key to make the
new login ticket key take effect.
To reset a login ticket key, execute the RESET_TICKET_KEY method. You can also
use the DFC method, resetTicketKey, in the IDfSession interface.
You must restart Documentum Server after resetting a login ticket key.
AAC tokens are enabled at the server level, to give you flexibility in designing your
security. For example, you can set up a server that requires a token to service users
outside a firewall and another server that does not require a token for users inside
the firewall.
Tokens are used in addition to user names and passwords (or login tickets) when
issuing a connection request. They are not a substitute for either. If a server requires
a token and the user making the connection request is not a Superuser, the
connection request must be accompanied by a token. Only Superusers can connect to
a repository through a server requiring a token without a token.
An application access control token can be created to be valid only when sent from a
particular host machine. This token authentication mechanism depends on
knowledge of the machine ID of the host. If you are using such tokens, you must set
the dfc.machine.id key in the dfc.properties file used by client applications on that
host so that DFC can include that ID when sending the application token to
Documentum Server. Set the key to the machine ID of the host.
This feature helps ensure that the appropriate token is provided when an
application is run from different machines and that older applications can connect to
a server that requires an AAC token for connection.
The dfc.tokenstorage.enable key is a Boolean key that controls whether the client
library can retrieve tokens from storage. If it is set to true, the client library will
attempt to retrieve a token from storage for use when a connect request without a
token is issued to a server requiring a token. If the key is set to false, the client
library does not retrieve tokens from storage.
The dfc.tokenstorage.dir key tells the client library where to find the token files
generated by the dmtkgen utility. Any tokens that you generate using dmtkgen
must be placed in this location. If they are not, the client library will not find them.
When you install DFC on a client host machine for the first time, the installation sets
the dfc.tokenstorage.enable key to false and the dfc.tokenstorage.dir to <user-selected
drive>:\Documentum\apptoken. If you upgrade or reinstall DFC, the procedure will
not reset those keys if you have changed them.
When you install Documentum Server for the first time on a host machine, the
process also installs DFC. The DFC installation process sets the
dfc.tokenstorage.enable key in the dfc.properties file on the server host to false and
the dfc.tokenstorage.dir to %DOCUMENTUM%\apptoken ($DOCUMENTUM/
apptoken). However, when the Documentum Server installer runs, it resets the
dfc.tokenstorage.enable key to true. The dfc.tokenstorage.dir key is not reset.
The dfc.properties file on the server host is used by the internal methods to connect
to repositories. The dfc.tokenstorage.enable and dfc.tokenstorage.dir settings in this
dfc.properties file affect methods associated with dm_method objects. Replication
and federation methods are not affected by the settings in this dfc.properties file
because these jobs execute as a Superuser.
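For illustration, a client-side dfc.properties fragment that enables token retrieval might look like this. The machine ID and directory values are placeholders, and forward slashes are used in the path to avoid Java properties escaping issues:

```properties
# Hypothetical values; substitute your host's machine ID and token directory.
dfc.machine.id=<machine_id_of_host>
dfc.tokenstorage.enable=true
dfc.tokenstorage.dir=C:/Documentum/apptoken
```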
“dmtkgen utility arguments” on page 278, lists the arguments accepted by this
utility.
Argument Description
-u username User as whom the utility is running.
-s scope Scope of the generated token. Valid values are:
• docbase
Specifying docbase restricts the use of the token to the repository identified in the output name of the file.
• global
Specifying global allows the token to be used to connect to any repository having the same login ticket key as the repository in which the token was generated.
-o output_file The name of the generated XML token file. You can specify a full file path or only the file name.
or
repository_name.tkn
Token files are retrieved by repository name. If the client library is looking for a
token in storage for a particular connection request, it searches for a token whose
name is the name of the requested repository in the connection request. If the library
cannot find a token whose name matches the repository name in the connection
request, it searches for a token file called default.tkn. Because of this token file search
algorithm, if you create a global token file for use across multiple repositories, you
must name the file default.tkn.
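Putting the arguments and the naming rule together, two hypothetical dmtkgen invocations might look as follows; the user and repository names are placeholders:

```
:: Repository-scoped token for the engineering repository
dmtkgen -u dmadmin -s docbase -o engineering.tkn

:: Global token: must be named default.tkn so the client library can find it
dmtkgen -u dmadmin -s global -o default.tkn
```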
The dfc.tokenstorage.dir key is set automatically when the DFC is installed. For
more information about its setting, refer to “Enabling token retrieval by the client
library” on page 276.
Note: In addition, the DFS (client and server) and Documentum Server must
be in the same domain.
This figure describes how Kerberos authentication is used to verify the credentials of
Documentum clients:
Note: It is mandatory that the SPN of all services are registered in the
KDC. The KDC can provide the Service Ticket only for a registered service.
4. The KDC Ticket Granting Service (TGS) authenticates the TGT and generates an
ST.
5. The client running the Documentum application uses the relevant DFC-based
API and provides the username and ST as password.
6. The Documentum Foundation Classes (DFC) pass the login information to the
Documentum Server service.
7. The Documentum Server service validates the ST and authenticates the user.
8. If authentication is enabled, the Documentum Server service sends the
acknowledged ticket to DFC.
9. DFC sends the acknowledged ticket back to the client to validate the
Documentum Server service. A session is established and no further
authentication is required.
– DES_CBC_CRC
– DES_CBC_MD5
– RC4_HMAC_MD5
– AES128_HMAC_SHA1
– AES256_HMAC_SHA1
1. Create one or more Kerberos realms and add all servers and clients that are
participating in Kerberos authentication to those Kerberos realms.
See “Configuring krb5.ini file” on page 284.
Kerberos SSO can only authenticate users who are registered as Kerberos users in
the Active Directory and in a repository.
To create Kerberos users for a repository, perform one of the following tasks:
• Create the user account manually in the Active Directory and repository.
• Synchronize the users from the Active Directory using LDAP synchronization,
then modify the LDAP configuration object.
Only the installation owner, system administrator, or superuser can create users in a
repository. If an LDAP server authenticates a user, only a superuser can modify the
user’s LDAP-mapped attributes.
There are two options for creating Kerberos users in the repository.
• Create the user manually, both in Active Directory and in the repository. For
each user in the repository, set the User Source property to dm_krb or leave it
blank. Setting the User Source property to dm_krb is optional.
• Synchronize the users from Active Directory using LDAP synchronization, then
modify the User Login Domain in the LDAP configuration object, as described
in “Configuring LDAP synchronization for Kerberos users” on page 165.
[libdefaults]
default_realm = <REALM>
forwardable = true
ticket_lifetime = 24h
clockskew = 72000
default_tkt_enctypes = <ticket encryption standards>
default_tgs_enctypes = <ticket granting server encryption standards>
[realms]
<REALM> = {
kdc = <kdc_server_ip>
admin_server = <admin_server_ip>
}
[domain_realm]
<domain> = <REALM>
[logging]
default = c:\kdc.log
kdc = c:\kdc.log
[appdefaults]
autologin = true
forward = true
forwardable = true
encrypt = true
Note: The length of the *.keytab filename is limited by the operating system.
where:
• <repo_name> is the name of the repository for which the SPN of repository is
registered.
• <xxxx> is a string that makes the file name unique.
You must configure a unique SPN for each repository. It is recommended to use the
following SPN syntax for registering a repository on the KDC service:
For example:
CS/[email protected]
where:
In the procedure discussed in this section, a user is registered on the KDC service to
act as the service principal for the repository service. This user maps to the SPN of
repository itself yet is distinct from users who need to be authenticated to use the
service.
There are two recommended procedures for registering repository SPNs. The
simplest approach is to create SPN of a single repository mapped to a single Active
Directory user, which enables you to set a single password for the SPN of repository.
An alternate approach is to register multiple repository SPNs mapped to the same,
single Active Directory user.
Notes
3. To map the SPN of repository to the user name specified in Step 2 using a one-
to-one mapping, perform the following steps:
a. Execute the ktpass utility with the user specified in Step 2 as follows:
For example:
ktpass /pass testpass
-out c:\Nag\REPO66.2008.keytab
-princ CS/[email protected]
/kvno 7 -crypto RC4-HMAC-NT +DumpSalt
-ptype KRB5_NT_SRV_HST /mapOp set /mapUser rcfkrbuser33
b. To restrict Kerberos delegation privileges for the new user, execute the
setspn command using the following syntax:
setspn -A CS/<reponame>@domain.com domain.com\<documentum_server_username>
On User Properties > Delegation of new user, select the Do not trust this
user for delegation checkbox.
c. Copy the *.keytab file to $DOCUMENTUM/dba/auth/Kerberos on the
Documentum Server.
4. To map multiple repository SPNs to the same user name using many-to-one
mapping, perform the following steps:
a. Set the SPN of a repository and create the *.keytab file using ktpass
utility with the user specified in Step 2.
Remember the salt string and the key version number (vno) because you
need to use them in Step 4.c.
For example:
ktpass /pass testpass
-out c:\Nag\REPO66.2008.keytab
-princ CS/[email protected]
/kvno 7 -crypto RC4-HMAC-NT +DumpSalt
-ptype KRB5_NT_SRV_HST /mapOp set /mapUser rcfkrbuser33
b. Set the SPN of another repository using the setspn utility to the same user
(<documentum_server_username> in the following syntax) created in Step
2. For example:
setspn -A CS/<reponame>@domain.com domain.com\<documentum_server_username>
On User Properties > Delegation of new user, select the Trust this user for
delegation to any service (Kerberos only) checkbox.
c. Execute the ktpass utility for the SPN of second repository without setting
the user specified in Step 2.
Use the salt and key version number (kvno) that were created in Step 4.a.
For example:
ktpass /pass testpass
-out c:\Nag\REPO66.2008_2.keytab
-princ CS/[email protected]
d. Repeat Step 4.b and Step 4.c for SPN of each additional repository.
e. Copy the *.keytab file to $DOCUMENTUM/dba/auth/Kerberos on the
Documentum Server host.
[realms]
<REALM> = {
kdc = <kdc_server_ip>
admin_server = <admin_server_ip>
}
[domain_realm]
<domain> = <REALM>
[logging]
default = c:\kdc.log
kdc = c:\kdc.log
[appdefaults]
autologin = true
forward = true
forwardable = true
encrypt = true
SAML 2.0 is an XML-based framework that allows identity and security
information to be shared across security domains. The SAML specification, while
primarily targeted at providing cross-domain web browser single sign-on, is also
designed to be modular and extensible to facilitate use in other SSO contexts.
– <stackTrace>: Set the value to true to include a stack trace in the log file when
authentication fails. The default value is false.
– <certPath>: The path on the Documentum Server host where the token-signing
certificate from the SAML server must be placed. The default path on Windows
is C:/SAML/cert. On Linux, set the appropriate certificate path.
– <responseSkew>: The maximum time difference, in seconds, allowed between
the Documentum Server and Active Directory machines. The default value is 60
seconds.
Sample web applications can be provided separately if you want to test the basic
SAML functionalities. You must contact Engineering for guidance on deploying and
testing the application.
The following figure describes how SAML authentication is used to verify the
credentials of Documentum clients:
3. Select the LDAP and add the two attributes for the following outgoing claim
types: Windows account name and Name ID
2. Import the public key certificate using the options in Microsoft AD FS. The
Microsoft documentation contains more information.
3. Use the following command to convert the private key to PKCS8 DER format:
openssl pkcs8 -topk8 -inform PEM -outform DER -nocrypt
-in adfs.key -out adfs.pk8
4. Copy the PKCS8 file from Step 3 to the location configured in <certPath> in
saml.properties.
12.14.5 Troubleshooting
Tracing can be done at two levels:
• Documentum Server:
apply,c,NULL,SET_OPTIONS,OPTION,S,trace_http_post,VALUE,B,T
apply,c,NULL,SET_OPTIONS,OPTION,S,rpctrace,VALUE,B,T
apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,T
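To turn any of these flags off again, issue the same call with the VALUE argument set to F. For example:

```
apply,c,NULL,SET_OPTIONS,OPTION,S,trace_http_post,VALUE,B,F
```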
Documentum Server creates an inbox item with details such as the name of the
event, time of occurrence, user, and message, which you can view in the Inbox of
Documentum Administrator.
During the Documentum Server installation, a set of default critical events are
registered automatically with an empty dm_critical_event_receiver_role role.
The following events are added to the list of critical events at the time of
Documentum Server installation:
The OpenText Documentum Server System Object Reference Guide contains information
about the privileges and the values added for the user_xprivileges user type
property.
13.1 Introduction
Documentum supports tracing and logging facilities for both Documentum Server
and DFC operations. The logging facilities record information generated
automatically by Documentum Server or DFC, including error, warning, and
informational messages. Logging errors, warnings, and informational messages
occurs automatically.
Some features, such as jobs, record tracing and logging information in log files
specific to those features. The sections that describe those features contain more
details about the associated log files.
The following table describes the trace flags for the -o option at server start-up:
To stop tracing from the command line, you must remove the trace flag from the
server command line and then stop and restart the server.
The severity levels are additive. For example, if you specify tracing level 10, you also
receive all the other messages specified by the lower levels in addition to the timing
information.
Note: The severity levels are also applicable for -Ftrace_level and -Ltrace_level
arguments.
The following table describes the facilities for the setServerTraceLevel method:
Facility Description
ALL Traces all available trace facilities.
DM_CONTENT Traces content operations. The information is
recorded in the session log file.
DM_DUMP Traces dump operations. The information is
recorded in the session log file.
DM_LOAD Traces load operations. The information is
recorded in the session log file.
DM_QUERY Traces query operations. The information is
recorded in the session log file.
SQL_TRACE Traces generated SQL statements. The
information is recorded in the session log
file.
The method can be executed from Documentum Administrator or using the DQL
EXECUTE statement or an IDfSession.apply method.
Turning tracing on and off using SET_OPTIONS affects only new sessions. Current
sessions are not affected.
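As a sketch, enabling one of the facilities above with the DQL EXECUTE statement might look like the following; the option and value argument names mirror the apply-method syntax, but verify the exact form against the DQL reference:

```
EXECUTE set_options WITH option = 'sql_trace', value = TRUE
```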
The following example describes a log file section for query tracing. The example
assumes that the following query is issued:
SELECT "user_os_name" FROM "dm_user" WHERE "user_name"='zhan'
All DFC tracing is configured in the dfc.properties file. The entries are dynamic.
Adding, removing, or changing a tracing key entry is effective immediately without
having to stop and restart DFC or the application.
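For example, a minimal dfc.properties tracing fragment might look like the following. The dfc.tracing.enable key is assumed here as the on/off switch; the remaining keys are described in this section:

```
dfc.tracing.enable=true
dfc.tracing.mode=standard
dfc.tracing.max_stack_depth=-1
dfc.tracing.timing_style=milliseconds_from_start
dfc.tracing.include_rpc_count=true
```

Because the entries are dynamic, removing or changing any of these keys takes effect without restarting the application.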
Where file_prefix is a string that is prepended to the name of each trace file
generated by DFC, and timestamp records when the file was created. An
identifier is included in the file name only if you are generating log files for
individual users or threads: the identifier is the user name if the logs are generated
for individual users, or the thread name if they are generated for individual threads.
Enabling tracing creates a logger and a logging appender that record the entries in
the trace log file. The logging appender determines the name, location, and format of
the trace file. “Logging appender options” on page 298 contains more information.
If no unit of measure is
specified, the integer value is
interpreted as bytes.
Value Description
standard All log entries are recorded in one log file.
This is the default.
thread DFC creates a log file for each thread in the
following format:
<file_prefix>.<threadname>.<timestamp>.log
By default, timestamps are expressed in seconds. You can configure the timestamp
display using the dfc.tracing.timing_style key. The following table describes the
valid values for dfc.tracing.timing_style:
Value Description
nanoseconds The timestamp value is expressed in
nanoseconds.
milliseconds_from_start The timestamp is recorded as the number of
milliseconds from the start of the JVM
process.
seconds The timestamp value is displayed as the
number of seconds from the start of the
process. The value is a float.
• dfc.tracing.timing_style
If the key value is date, the date format and column width can be specified in the
dfc.tracing.date_format and dfc.tracing.date_column_width keys.
• dfc.tracing.date_format
Specifies a date format that is different from the default format. The specified
format must conform to the syntax supported by the Java class
java.text.SimpleDateFormat.
• dfc.tracing.date_column_width
This key specifies the column width as a positive integer value. The default
column width is 14. If the format specified in dfc.tracing.date_format is wider
than 14 characters, modify the column width to the required number of
characters.
• dfc.tracing.max_stack_depth
This key controls the depth of tracing into the DFC call stack. The default value
for this key is 1, meaning only the first method call in a DFC stack is traced. A
value of -1 means that all levels of method calls are traced.
• dfc.tracing.mode
This key controls how the method entry and exit is recorded in the log file. Valid
values are:
– compact: DFC adds one line to the file that records both entry and exit and
return value for a method. The methods are logged in the order of method
entrance. DFC buffers the log entries for a sequence of method calls until the
topmost method returns.
– standard: DFC creates one line in the file for a method entry and one line for a
method exit. The log entries for exit record the return values of the methods.
Where <value> is one or more string expressions that identify what to trace. The
syntax of the expression is:
([<qualified_classname_segment>][*]|*)[.[<method_name_segment>][*]()]
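For instance, expressions matching this syntax could include the following (the class and method names are hypothetical, for illustration only):

```
com.documentum.fc.client.*
*.getObject*()
com.documentum.*.save*()
```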
For example:
dfc.tracing.user_name_filter[0]=<expression>
dfc.tracing.user_name_filter[1]=<expression>
dfc.tracing.user_name_filter[2]=<expression>
The expression is a regular expression, using the syntax defined in the Java class
java.util.regex.Pattern. DFC traces the activities of the users whose login names
(dm_user.user_login_name) match the specified expression. The log entries
contain the user name. By default, the dfc.tracing.user_name_filter key is
empty, which means that all users are traced.
The tracing output does not necessarily include all DFC calls made by a particular
user. For some calls, it is not possible to identify the user. For example, most
methods on the DfClient interface are unrelated to a specific session or session
manager. Consequently, when tracing is constrained by user, these method calls are
not traced.
When tracing by user, DFC cannot identify the user until one of the following
occurs:
• A method call on an object that derives from IDfTypedObject (if the session
associated with that object is not an API session).
• A method call that takes an IDfSession or IDfSessionManager.
• A method call that returns an IDfSession or IDfSessionManager.
• An IDfSessionManager method call that sets or gets an IDfLoginInfo object.
For example:
dfc.tracing.thread_name_filter[0]=<expression>
dfc.tracing.thread_name_filter[1]=<expression>
dfc.tracing.thread_name_filter[2]=<expression>
The key is not set by default, which means that all methods in all traced threads are
traced by default. Setting the key is primarily useful for standalone DFC-based
applications that may spawn DFC worker threads.
Note: You can use dfc.tracing.file_creation_mode to further sort the entries for
each thread into separate files. “Defining file creation mode” on page 299
contains more information.
• dfc.tracing.include_rpc_count
This Boolean key instructs DFC to add a column to each log file entry
that records the current RPC call count for the associated session. The logged
value is N/A if DFC cannot determine the session. If the mode is set to compact,
the logged RPC count is the count at the time the method exits. The default value
for the key is false.
• dfc.tracing.include_rpcs
This Boolean key instructs DFC to include a separate line in the trace file for each
executed RPC call. The default value for this key is false.
If the key is enabled, the tracing facility logs the entire stack trace for the exception.
The stack trace is recorded immediately following the line that logs the exit of the
method that caused the exception. The trace is indented and prefixed with
exclamation marks.
• dfc.tracing.log.category
This key specifies the categories that are traced. The key is a repeating key that can
have multiple entries in the dfc.properties file. Each entry has one value that
specifies a category. Each dfc.tracing.log.category entry is referenced using an
integer in brackets, with the first entry beginning at zero. For example:
dfc.tracing.log.category[0]=<class_or_package_name>
dfc.tracing.log.category[1]=<class_or_package_name>
• dfc.tracing.log.level
This key specifies the level of tracing. The dfc.tracing.log.level key defaults to
DEBUG if not specified.
• dfc.tracing.log.additivity
This key is the additivity setting for the category. The dfc.tracing.log.additivity
key defaults to false if not specified.
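Putting the three keys together, a minimal configuration that routes a DFC log category into the trace file might look like the following (the package name is illustrative):

```
dfc.tracing.log.category[0]=com.documentum.fc.client
dfc.tracing.log.level=DEBUG
dfc.tracing.log.additivity=false
```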
Component Description
timestamp The timestamp of the entry. The format
depends on the tracing configuration, as
defined in the dfc.tracing.timing_style key.
method_duration The total length of time to execute the
method. This value is only included if the
dfc.tracing.mode key value is compact.
threadname Name of the thread associated with the
entry.
username Name of the user associated with the entry.
sessionID Identifies the session associated with the
entry. This entry is only included if the
dfc.tracing.include_session_id key is
enabled (true).
session_mgr_id The hash code identity of the session
manager associated with the entry. This
entry is only included if the
dfc.tracing.include_session_id key is enabled
(true).
RPCs=<count> Total number of RPC calls at the time the
entry was logged. This is only included if
dfc.tracing.include_rpc_count is set to true.
entry_exit_designation Keyword that identifies the source of the log
entry. The keyword is only included if the
dfc.tracing.mode key value is standard.
Valid values are:
• ENTER: The entry records a method
entry.
• EXIT: The entry records a method exit.
• !EXC!: The entry records a call that results
in an exception.
• RPC_ENTER: The entry records an RPC
call entry.
• RPC_EXIT: The entry records an RPC call
exit.
stack_depth_indicator Indicates the depth of call stack tracing. The
stack depth indicator is a series of dots (for
example, ...).
qualified_class_name Fully qualified name of the class being
traced.
object_identity_hash_code Hash code of the object on which the method
is being called. If the method is static, this
element does not appear in the log entry.
method Name of the traced method.
method_arguments Arguments to the specified method.
return_value The value returned by the method. This
entry is included if the dfc.tracing.mode
value is compact or if the entry records the
method exit.
exception The exception name and message, if any, of
the exception thrown by the method.
The values in each column in an entry are separated by spaces, not tabs.
14.1 Auditing
Auditing is a security feature for monitoring events that occur in a repository or
application. Auditing an event creates an audit trail, a history in the repository of the
occurrence of the event. Audit information can be used to:
• Record end user tracking information that includes IP address and geographical
location of end user, Client host, Client ID, and Client name for contextual
awareness and security monitoring.
Note: Make sure that the dm_connect and dm_logon_failure events are
registered for auditing.
Auditing is managed on the Audit Management page under the Administration >
Audit Management node.
• System events
System events are events that Documentum Server recognizes and can audit
automatically. For example, checking in a document can be an audited system
event.
Documentum Server provides automatic auditing for any system event. When an
audited system event occurs, Documentum Server automatically generates the
audit trail entry. Documentum provides a large set of system events that are
associated with API methods, lifecycles (business policies), workflows, and jobs.
For a complete list of auditable system events, see Appendix A, System events
on page 627.
• Application events
Application events are events that are recognized and audited by client
applications. For example, a user opening a particular dialog can be an audited
application event. To audit application events, an application must recognize
when the event occurs and create the audit trail entry programmatically.
Application events are not recognized by Documentum Server.
Audit trail entries are generated after auditing is set up and can consume
considerable space in the repository. Therefore audit trail entries should be removed
from the repository periodically.
Aspect attributes can also be audited. Aspect attribute audits record changes to the
aspect attributes attached to an object, along with the object type attributes.
Auditing is available for aspect attributes attached to SysObject, user, group, and acl
objects, and any of their associated subtypes. Auditing aspect attributes requires that
auditing is also enabled for the object type or subtype.
Note: For repositories created on RDBMS platforms with a page size of less
than 8K, the audit_old_values property is always disabled and cannot be
enabled.
• TRANSACTION_TRACKING_FEATURE
When you enable the TRANSACTION_TRACKING_FEATURE module, the
events described in the following table are audited:
Table 14-2: Events audited by the TRANSACTION_TRACKING_FEATURE
module
Object type Audited events
dm_user dm_save, dm_destroy, dm_connect,
dm_disconnect
dm_group dm_save, dm_destroy
• dm_occasional_user_role
For all occasional users that are members of the dm_occasional_user_role role,
the dm_connect and dm_disconnect events are audited.
You can set audits only for one object type at a time. Complete these instructions for
each object type you audit.
Field Description
Application Code Enter the application code to audit only
objects with a particular application code.
Field Description
Add Click to access the Choose an event page and
select the events you want to register.
Aspect attributes can be audited if they are attached to SysObject, user, group, and
acl objects, and any of their associated subtypes. Auditing aspect attributes requires
that the related aspect type is registered for auditing.
Field Description
Attributes Records the values of particular properties of
the object in the audit trail. Click Select
Attributes to access the Choose an attribute
page.
Field Description
Add Click to access the Choose an event page and
select the events you want to register.
Field Description
Search By Indicates whether to use the criteria specified
on the search page or a DQL query to search
for the audit trail.
Object Name Restricts the search by object names. Select
Begins With, Contains, or Ends With and
type in a string.
Versions Restricts the search by version. Type in a
version.
Look In Restricts the search to a particular folder.
Click Select Folder and select the folder.
Audit Dates Restricts the search by time. Click Local
Time or UTC, then type or select a beginning
date in the From field and an ending date for
the search in the Through field.
Type Restricts the search to a particular type. Click
Select Type and select the type. To include
subtypes of the type, click Include Subtype.
Lifecycle Restricts the search to objects attached to a
lifecycle. Click Select Lifecycle and select a
lifecycle.
Application Code Restricts the search to objects with an
application code. Type the application code.
To restrict the search to those audit trails that
are signed, select Has Signature.
Controlling Application Restricts the search to objects with a
controlling application. Type the name of the
application.
Audit policies specify which user, group, or role can purge audit trails. You must be
an Install Owner to access and manage audit policies. Other users can only view the
list of audit policies.
Audit policies are managed on the Audit Policies page. Select Administration >
Audit Management to display the Audit Management page, then click the Audit
Policies link to display the Audit Policies page. “Audit policies page information”
on page 318 describes the information on the Audit Policies page.
Field Description
Name The name of the audit policy.
Accessor Name The user, group, or role to which this audit
policy is assigned.
Audit Policy Rules Specifies the policy rules, as follows:
• Click Add to add a rule. The Create/Edit
Rule page displays. Select an attribute
and enter a value for the attribute.
• Select an attribute name, then click Edit to
modify the rule. The Create/Edit Rule
page displays. Modify the attribute.
• Select an attribute name, then click
Remove to delete the rule. There must be
at least one rule or condition to save the
audit policy.
For example, consider an audit policy with the following values:
Name: Test
Accessor Name: user1
Audit Policy Rules:
object_type = type1
is_archived = T
The policy described in this example only protects audit records that satisfy the
policy. For example, the policy does not protect audit records that have the
is_archived attribute set to F. Any user with purge audit extended privilege can
delete those records.
Select the properties as described in “Object and object instance auditing properties”
on page 320 to define the audited properties and register the object or object instance
for auditing.
• Application Code
• Lifecycle
• State
• Has signature manifested
• Authentication Required
Field Description
Attributes Records the values of particular properties of
the object in the audit trail. Click Select
Attributes to access the Choose an attribute
page.
The properties that have a common purpose for entries generated by Documentum
Server include all the properties other than the string_<x> and id_<x> properties. An
audit trail object also has five ID-datatyped properties and five string properties.
These properties are named generically, id_1 to id_5 and string_1 to string_5. In
audit trail entries for workflows and lifecycle events, Documentum Server uses these
properties to store information specific to the particular kinds of events. Some of the
events generated by methods other than workflow or lifecycle-related methods use
these generic properties and some do not. A user-defined event can use these
properties to store any information needed.
The properties defined for the dm_audittrail_acl object type store information about
an audited ACL. The properties identify the event (dm_save, dm_saveasnew, or
dm_destroy), the target ACL, the ACL entries or changes to the entries. For example,
suppose you create an ACL with the following property values:
r_object_id:451e9a8b0001900
r_accessor_name [0]:dm_world
[1]:dm_owner
[2]:John Doe
r_accessor_permit [0]:6
[1]:7
[2]:3
r_accessor_xpermit [0]:0
[1]:0
[2]:196608
r_is_group [0]:F
[1]:F
[2]:F
The audit trail acl object created in response has the following property values:
r_object_id:<audit trail acl obj ID>
audited_obj_id :451e9a8b0001900
event_name :dm_save
string_1 :Create
accessor_operation[0] :I
[1] :I
[2] :I
accessor_name [0] :dm_world
[1] :dm_owner
[2] :John Doe
accessor_permit [0] :6
[1] :7
[2] :3
accessor_xpermit [0] :0
[1] :0
[2] :196608
application_permit [0]:
[1]:
[2]:
permit_type [0]:0
[1]:0
[2]:0
is_group [0] :F
[1] :F
[2] :F
The event_name records the repository event, a save method, that caused the
creation of the audit trail entry. The string_1 property records the event description,
in this case, Create, indicating the creation of an ACL. The accessor_operation
property describes what operation was performed on each accessor identified at the
corresponding index position in accessor_name. The accessor_permit and
accessor_xpermit properties record the permissions assigned to the user (or group)
identified in the corresponding positions in accessor_name. Finally, the is_group
property identifies whether the value in accessor_name at the corresponding
position represents an individual user or a group.
Now suppose you change the ACL. The changes you make are:
• Change the permission assigned to dm_world.
• Add the group Sales.
• Remove the user John Doe.
The resulting audit trail acl object has the following properties:
r_object_id :<audit trail acl obj ID>
audited_obj_id :451e9a8b0001900
event_name :dm_save
string_1 :Save
accessor_operation [0] :U
[1] :I
[2] :D
accessor_name [0] :dm_world
[1] :Sales
[2] :JohnDoe
accessor_permit [0] :0
[1] :2
[2] :3
accessor_xpermit [0] :0
[1] :0
[2] :196608
application_permit [0]:
[1]:
[2]:
permit_type [0]:0
[1]:0
[2]:0
is_group [0] :F
[1] :T
[2] :F
Audit trail group objects are interpreted similarly. The values in the corresponding
index positions in users_names and users_names_operations represent one
individual user who is a member of the audited group. The values in the
corresponding positions in groups_names and groups_names_operation represent
one group that is a member of the audited group. The operations property defines
what operation was performed on the member at the corresponding names
property.
15.1 Methods
Methods are executable programs that are represented by method objects in the
repository. The program can be a Docbasic script, a Java method, or a program
written in another programming language such as C++. The associated method
object has properties that identify the executable and define command line
arguments, and the execution parameters.
The executable invoked by the method can be stored in the file system or as content
of the method object. If the method is executed by the Java method server, the entire
custom or xml app JAR must be stored in the $DM_JMS_HOME/webapps/DmMethods/
WEB-INF/lib directory. For workflow methods, the .jar files must be placed under
the Process Engine deployment in the $DM_JMS_HOME/webapps/bpm/WEB-INF/lib
directory.
If the program is a Java method and you want to execute it using the Java method
server, install the method on the host file system of the application server. Only the
application server instance installed during Documentum Server installation can
execute Java methods. Do not store the program in the repository as the content of
the method object.
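As an illustrative sketch only, a method object for a Java method could be created with DQL along the following lines. The attribute names method_verb, method_type, timeout_default, and run_as_server are standard dm_method properties, but the values shown (class name, timeout) are hypothetical; verify the attributes against the object reference guide:

```
CREATE "dm_method" OBJECT
SET object_name = 'my_custom_method',
SET method_verb = 'com.example.MyMethod',
SET method_type = 'java',
SET timeout_default = 120,
SET run_as_server = TRUE
```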
Field Description
Name The method name.
Method Success Status Specifies the valid value for current status in
the completed job. If this option is selected,
the current status property value of the job
must match the success status value after the
job completes. The property is ignored if the
option is not selected.
Timeout Minimum The minimum timeout that can be specified
on the command line for this procedure. The
minimum timeout value cannot be greater
than the default value specified in the
timeout default field.
Timeout Default The default timeout value for the procedure.
The system uses the default timeout value if
no other time-out is specified on the
command line.
Run As Owner Specifies whether the method runs as
the installation owner account, with the
privileges of the installation owner. If this
option is not selected, the method runs with
the privileges of the method user.
Field Description
Title A descriptive title for the method.
Subject A subject associated with the method.
Keywords One or more keywords that describe the
method. Click Edit to add keywords.
Authors One or more method authors. Click Edit to
add authors.
Owner Name The name of the method owner. Click Edit to
select a different owner.
Version Label The current version label of the method.
Click Edit to change the version label.
Checkout Date The date when the method was checked out
last.
Checked Out by The name of the user who checked out the
method.
Created The date and time when the method was
created.
Creator Name The name of the user who created the
method.
Modified The date and time when the method was last
modified.
Modified By The name of the user who last modified the
method.
Accessed The time and date when the method was last
accessed.
If you run a default Documentum method from the Run Method page, select Run as
server unless you are logged in as the installation owner.
After you run the method, the following method results appear:
Click OK to exit the results page and return to the Methods list page.
The following sections provide more information about full-text indexing methods:
The following section provides more information about the workflow method:
15.3.2.1 CAN_FETCH
Any user can run the CAN_FETCH administration method to determine whether
the server can fetch a specified content file.
15.3.2.2 CLEAN_LINKS
The CLEAN_LINKS administration method removes linked_store links not
associated with sessions, unnecessary dmi_linkrecord objects, and auxiliary
directories.
15.3.2.3 DELETE_REPLICA
The DELETE_REPLICA administration method removes a content file from a
component area of a distributed storage area.
15.3.2.4 DESTROY_CONTENT
The DESTROY_CONTENT method removes content objects from the repository and
their associated content files from storage areas.
15.3.2.5 EXPORT_TICKET_KEY
The EXPORT_TICKET_KEY administration method encrypts and exports a login
ticket from the repository to a client machine.
15.3.2.6 GET_PATH
The GET_PATH administration method returns the directory location of a content
file stored in a distributed storage area.
15.3.2.7 IMPORT_REPLICA
The IMPORT_REPLICA administration method imports files from one distributed
storage area into another distributed storage area.
15.3.2.8 IMPORT_TICKET_KEY
The IMPORT_TICKET_KEY administration method decrypts a login ticket from a
client machine and imports the ticket into the repository.
15.3.2.9 MIGRATE_CONTENT
The MIGRATE_CONTENT administration method migrates content files from one
storage area to another.
• Single SysObject.
• Sets of content objects qualified by a DQL predicate against dmr_content.
• Make sure that all objects to be migrated are checked in to the repository. If you
migrate any checked-out objects, check-in fails because of mismatched versions.
• Make sure that the file store to which you migrate objects has sufficient disk
space for the migration.
Note: For a read-only filestore, when you use MIGRATE_CONTENT with the
remove_original parameter set to TRUE, the migration fails because the
method cannot remove the content from the source filestore. You must bring
the storage area online for the migration to succeed.
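A hedged sketch of the query-based form follows. Only remove_original is named above; the target_store and query parameter names, and the placeholder object ID, are assumptions to be verified against the administration methods reference:

```
EXECUTE migrate_content
WITH query = 'r_object_id = ''06xxxxxxxxxxxxxxxx''',
target_store = 'filestore_02',
remove_original = FALSE
```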
Field Description
Migrate Select A single object from the drop-down
list.
Content Click Select Object, then select an object type
from the Select From drop-down list.
Field Description
Command File Name A string that specifies a file path to a log file.
Field Description
Migrate Select All content in a filestore from the
drop-down list.
Source Select a source file store from the Source
drop-down list.
Field Description
Source Direct Access Specifies whether the content files in the
source store can be directly accessed through
a full file path.
Field Description
Migrate Select All content satisfying a query from
the drop-down list.
Select Object Type to Migrate Specify the object type to migrate.
Field Description
Path Click Select Path and select a location on the
server file system for the log file path.
Target Select a target file store from the drop-down
list.
Maximum Specifies the maximum number of objects to
migrate.
Field Description
Update Only This option is used only in conjunction with
the Direct Copy or Direct Move option.
When selected, move or copy commands are
written to the log file specified in the
Command File Name field.
If the MIGRATE_CONTENT method fails with an error, the entire batch transaction
is rolled back. If the destination has content files that were created from the
successful migrations within the batch, you can clean up those files by running the
Dmfilescan job, as described in “Dmfilescan” on page 392.
15.3.2.10 PURGE_CONTENT
The PURGE_CONTENT administration method marks a content file as offline and
deletes the file from its storage area. The method does not back up the file before
deleting it; make sure that you have archived the file before running
PURGE_CONTENT on it.
15.3.2.11 REPLICATE
The REPLICATE administration method copies content files from one component of
a distributed storage area to another. This task is normally performed by the
Content Replication tool or by the Surrogate Get feature. Use the REPLICATE
administration method as a manual backup to Content Replication and Surrogate
Get.
The REPLICATE method returns TRUE if the operation succeeds or FALSE if it fails.
You must have system administrator or superuser privileges to use the REPLICATE
method.
15.3.2.12 RESTORE_CONTENT
The RESTORE_CONTENT administration method restores an offline content file to
its original storage area. It operates on one file at a time. If you need to restore more
than one file at a time, use the API Restore method.
You can use RESTORE_CONTENT only when one server handles both data and
content requests. If your configuration uses separate servers for data requests and
content file requests, you must issue a Connect method that bypasses the
Documentum Server to use RESTORE_CONTENT in the session.
15.3.2.13 SET_STORAGE_STATE
The SET_STORAGE_STATE administration method changes the state of a storage
area. A storage area is in one of three states:
• On line
An on-line storage area can be read and written to.
• Off line
An off-line storage area cannot be read or written to.
• Read only
A read-only storage area can be read, but not written to.
You can use SET_STORAGE_STATE only when one server handles both data and
content requests. If your configuration uses separate servers for data requests and
content file requests, you must issue a Connect method that bypasses the content file
server to use SET_STORAGE_STATE in the session.
15.3.2.14 DB_STATS
The DB_STATS administration method displays statistics about database operations
for the current session. The statistics are counts of the numbers of:
15.3.2.15 DROP_INDEX
The DROP_INDEX administration method destroys a user-defined index on an
object type. The DROP_INDEX method returns TRUE if the operation succeeds or
FALSE if it fails. You must have superuser privileges to run the DROP_INDEX
administration method.
15.3.2.16 EXEC_SQL
The EXEC_SQL administration method executes SQL statements, with the exception
of SQL Select statements.
The EXEC_SQL method returns TRUE if the operation succeeds or FALSE if it fails.
• If you use the Apply method to execute the method and the query contains
commas, you must enclose the entire query in single quotes.
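For example, using the DQL EXECUTE statement (the table name is a placeholder):

EXECUTE exec_sql WITH query='drop table my_scratch_table'

Using the Apply method with a query that contains commas, the entire query is
enclosed in single quotes, for example:

apply,c,NULL,EXEC_SQL,QUERY,S,'update my_table set col1 = 1, col2 = 2'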
15.3.2.17 FINISH_INDEX_MOVES
The FINISH_INDEX_MOVES administration method completes unfinished object
type index moves. The FINISH_INDEX_MOVES method returns TRUE if the
operation succeeds or FALSE if it fails. You must have superuser privileges to run
the FINISH_INDEX_MOVES administration method.
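For example, the method can be invoked with the DQL EXECUTE statement. It is
shown here without arguments; refer to the OpenText Documentum Server DQL
Reference Guide for any optional arguments:

EXECUTE finish_index_moves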
15.3.2.18 GENERATE_PARTITION_SCHEME_SQL
The GENERATE_PARTITION_SCHEME_SQL administration method is available to
administrators and superusers, subject to additional restrictions. Running the
method generates a script, which can then be run to partition the repository. The
method supports three operations (DB_PARTITION, ADD_PARTITION, and
EXCHANGE_PARTITION), configured with the following parameters:
Operation
    Select an operation from the dropdown list box to define the subcommand.
    The options are:
    • DB_PARTITION: Generates a script to upgrade or convert a repository to a
      6.5 partitioned repository. If selected:
      – Select Partition Type or Table Name.
      – If Table Name is defined, optionally define the Owner Name.
      – Include object type is optional. Select it to apply the partition operation
        to the dmi_object_type table.
      – Last Partition and Last Tablespace are optional.
      – In the Partitions section, Partition Name, Range, and Tablespace are
        required.
    • ADD_PARTITION: Generates a script to add a partition to a partitioned
      type. If selected:
      – Select Partition Type or Table Name.
      – If Table Name is defined, optionally define the Owner Name.
      – Include object type is optional. Select it to apply the partition operation
        to the dmi_object_type table.
      – In the Partitions section, Partition Name, Range, and Tablespace are
        required.
    • EXCHANGE_PARTITION: Generates a script for bulk ingestion by loading
      data from an intermediate table into a new partition of a partitioned table.
      If selected:
      – Partition Type and Table Name are mutually exclusive.
      – If Table Name is defined, optionally define the Owner Name.
      – Include object type is optional. Select it to apply the partition operation
        to the dmi_object_type table.
      – Partition Name, Range, and Tablespace are required.
      – Temp Table Suffix is optional.
Partition Type
    Select a partition type from the dropdown list box, which displays a list of the
    partition types available for the repository. All is the default for
    DB_PARTITION and ADD_PARTITION, but is not available for
    EXCHANGE_PARTITION. If you select Partition Type, then you cannot select
    Table Name.
Table Name
    Type a table name. If you select Table Name, then you cannot select Partition
    Type.
Include object type
    Optionally, select to apply the partition operation to the dmi_object_type
    table.
Owner Name
    Type an owner name. This field is enabled only if Table Name is selected.
Last Partition
    Optionally, type a name for the last partition. This field appears only when
    DB_PARTITION is selected as the operation.
Last Tablespace
    Optionally, type a tablespace name for the last partition. This field appears
    only when DB_PARTITION is selected as the operation.
Partition Name
    Type a name for the partition. For DB_PARTITION and ADD_PARTITION
    operations, you must first click Add in the Partitions section to add
    information for each partition.
Range
    Type the upper limit for the partition key range. For DB_PARTITION and
    ADD_PARTITION operations, you must first click Add in the Partitions
    section to add information for each partition.
Tablespace
    Type the partition tablespace name. If not specified, the default tablespace is
    used. For DB_PARTITION and ADD_PARTITION operations, you must first
    click Add in the Partitions section to add information for each partition.
Temp Table Suffix
    Type a temporary table suffix. This field is enabled and optional only if
    EXCHANGE_PARTITION is selected as the operation.
15.3.2.19 MAKE_INDEX
The MAKE_INDEX administration method creates an index for any persistent object
type. You can specify one or more properties on which to build the index. If you
specify multiple properties, you must specify all single-valued properties or all
repeating properties. Also, if you specify multiple properties, the sort order within
the index corresponds to the order in which the properties are specified in the
statement. You can also set an option to create a global index.
15.3.2.20 MOVE_INDEX
The MOVE_INDEX administration method moves an existing object type index
from one tablespace or segment to another. The MOVE_INDEX method returns
TRUE if the operation succeeds or FALSE if it fails. You must have system
administrator or superuser privileges to run the MOVE_INDEX administration
method.
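For example, using the DQL EXECUTE statement (the argument names and
tablespace value are illustrative; refer to the OpenText Documentum Server DQL
Reference Guide for the exact signature):

EXECUTE move_index WITH index_name='custom_title_index',tablespace='DM_INDEX_TS'

where the tablespace value names the destination tablespace or segment.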
15.3.2.21 ESTIMATE_SEARCH
The ESTIMATE_SEARCH administration method returns the number of results
matching a particular full-text search condition. The return value is one of the
following:
• The exact number of matches that satisfy the SEARCH condition, if the user
running the method is a superuser or there are more than 25 matches.
• The number 25 if there are 0-25 matches and the user running the method is not a
superuser.
• The number -1 if there is an error during execution of the method.
Any user can execute this method. However, the permission level of the user affects
the return value. The ESTIMATE_SEARCH administration method is not available if
the connected repository is configured with the xPlore search engine.
15.3.2.22 MARK_FOR_RETRY
The MARK_FOR_RETRY administration method finds content that has a particular
negative update_count property value and marks such content as awaiting indexing.
Use MARK_FOR_RETRY at any time to mark content that failed indexing for retry.
Note that MARK_FOR_RETRY does not take the update_count argument.
15.3.2.23 MODIFY_TRACE
The MODIFY_TRACE administration method turns tracing on and off for full-text
indexing operations. The MODIFY_TRACE method returns TRUE if the operation
succeeds or FALSE if it fails. You must have system administrator or superuser
privileges to run the MODIFY_TRACE administration method.
15.3.2.24 GET_LAST_SQL
The GET_LAST_SQL administration method retrieves the SQL translation of the last
DQL statement issued. Any user can run GET_LAST_SQL.
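For example, the method can be invoked with the DQL EXECUTE statement, shown
here without arguments:

EXECUTE get_last_sql

The result contains the SQL translation of the most recently issued DQL statement.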
15.3.2.25 LIST_RESOURCES
The LIST_RESOURCES administration method lists information about the server
and the operating system environment of the server. You must have system
administrator or superuser privileges to run the LIST_RESOURCES administration
method.
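For example, the method can be invoked with the DQL EXECUTE statement, shown
here without arguments:

EXECUTE list_resources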
15.3.2.26 LIST_TARGETS
The LIST_TARGETS administration method lists the connection brokers to which
the server is currently projecting. Additionally, it displays the projection port,
proximity value, and connection broker status for each connection broker, as well as
whether the connection broker is set (in server.ini or the server configuration object).
Any user can run LIST_TARGETS.
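For example, the method can be invoked with the DQL EXECUTE statement, shown
here without arguments:

EXECUTE list_targets

The result lists each connection broker to which the server is currently projecting.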
15.3.2.27 SET_OPTIONS
The SET_OPTIONS administration method turns tracing options on or off. You can
set the following options:
Option            Action
clean             Removes the files from the server common area.
crypto_trace      Traces cryptography information.
debug             Traces session shutdown, change check, launch, and fork
                  information.
docbroker_trace   Traces connection broker information.
i18n_trace        Traces client session locale and codepage. An entry is logged
                  identifying the session locale and client code page whenever a
                  session is started.
last_sql_trace    Traces the SQL translation of the last DQL statement issued
                  before access violation and exception errors.
15.3.2.28 RECOVER_AUTO_TASKS
Run the RECOVER_AUTO_TASKS administration method to recover workflow
tasks that have been claimed, but not yet processed by a workflow agent associated
with a failed Documentum Server.
If a Documentum Server fails, its workflow agent is also stopped. When the server is
restarted, the workflow agent recognizes and processes any work items it had
claimed but not processed before the failure. However, if you cannot restart the
Documentum Server that failed, you must recover those work items already claimed
by its associated workflow agent so that another workflow agent can process them.
The RECOVER_AUTO_TASKS administration method performs that recovery.
15.3.2.29 WORKFLOW_AGENT_MANAGEMENT
Run the WORKFLOW_AGENT_MANAGEMENT method to start and shut down a
workflow agent.
If the workflow agent startup or shutdown process fails, the Administration Method
Results page displays an error message indicating the process failure and provides
additional information. There are several reasons why a workflow agent startup or
shutdown process can fail:
• The Documentum Server projects to a connection broker that is not listed in the
dfc.properties of the client running Documentum Administrator.
If the repository is not reachable, the Parameters page displays the Workflow
Agent Current Status as Unknown.
15.3.2.30 REGEN_EXPRESSIONS
By default, dmbasic expressions generate the INTEGER data type. Use the following
procedure to convert INTEGER to LONG:
For Linux:
dmbasic -f $DM_HOME/install/admin/dm_recreate_expr.ebs
-e Recreate -- <repository> <username> <password> all
To generate LONG data type for dmscript of a particular activity, use the following
IAPI command:
apply,c,<activity_id>,REGEN_EXPRESSIONS
15.4 Jobs
Jobs are repository objects that automate method object execution. Methods
associated with jobs are executed automatically on a user-defined schedule. The
properties of a job define the execution schedule and turn execution on or off. Jobs
are invoked by the agent exec process, a process installed with Documentum Server.
At regular intervals, the agent exec process examines the job objects in the repository
and runs those jobs that are ready for execution. Any user can create jobs.
You can create additional jobs to automate the execution of any method and you can
modify the schedule for executing existing jobs.
The Asynchronous Write job polls for content still in a parked state and generates
new messages for the DMS server to pass to BOCS to request the upload of the
parked content. After execution, the job lists all content objects that had yet to be
moved from the parked state and for which messages were sent to the DMS server.
If a BOCS server receives a request to migrate content that it has already processed,
it ignores the request.
This job is inactive by default, but should be enabled whenever asynchronous mode
is allowed. The job is scheduled to run daily at 2:00 a.m. by default.
The tool runs under the Documentum installation owner account to execute the
dm_AuditMgt job. The job uses the PURGE_AUDIT administration method to
remove the audit trail entries from the repository. Consequently, to use this tool, the
Documentum installation owner must have Purge Audit privileges. All executions
of the tool are audited. The generated audit trail entry has the event name
dm_purgeaudit.
To choose the objects to remove from the subset selected by cutoff_days, change the
custom_predicate argument. By default, the custom predicate includes three
conditions:
• delete_flag=TRUE
• dequeued_date=value (value is computed using the cutoff_days argument)
• r_gen_source=1
You cannot change the first two conditions. The third condition, r_gen_source=1,
directs the server to delete only audit trail objects generated by system-defined
events. If you want to remove only audit trail objects generated by user-defined
events, reset this to r_gen_source=0. If you want to remove audit trail objects
generated by both system- and user-defined events, remove the r_gen_source
expression from the custom predicate.
You may also add other conditions to the default custom predicate. If you add a
condition that specifies a string constant as a value, you must enclose the value in
two single quotes on each side. For example, suppose you want to remove only
audit trail entries that record dm_checkin events. To do so, add the following to the
custom_predicate:
event_name=''dm_checkin''
dm_checkin is enclosed by two single quotes on each side. Do not use double
quotes. These must be two single quotes.
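Putting this together, a custom predicate that removes only system-generated audit
trail entries for check-in events might look like this (illustrative):

r_gen_source=1 and event_name=''dm_checkin''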
The Audit Management tool generates a status report that lists the deleted
dm_audittrail entries. The report is saved in the repository in /System/Sysadmin/
Reports.
If an error occurs while the tool is executing, the server sends email and inbox
notification to the user specified by the -auditperson argument.
The Audit Management tool is installed in the inactive state. The first time you
execute the tool, it may take a long time to complete.
15.4.1.4.1 Arguments
“Audit management arguments” on page 371, lists the arguments to the Audit
Management tool.
The qualification must be a valid qualification and can reference only audit trail
object type properties. For example, a valid qualification is event=‘approved’ or
name=‘dmadmin’. Refer to the general discussion of the tool for details of setting
this argument. See the OpenText Documentum Server DQL Reference Guide for complete
information about WHERE clause qualifications.
-cutoff_days integer 90
    A minimum age, in days, for objects to delete. All audit trail objects older
    than the specified number of days and that meet the specified qualification
    are deleted.
15.4.1.4.2 Guidelines
Audit trail entries are the result of audited events. The more events you audit, the
more audit trail entries are generated in a fixed period of time.
You must decide if there are any reasons to keep or maintain audit trail entries. For
example, you may want to keep certain items for traceability purposes. If so, leave
this tool inactive or set the cutoff_days argument to a value that will save the audit
trail items for a specified length of time.
Appendix D, Consistency checks on page 659, lists the consistency checks conducted
by the tool and the error number assigned to each.
The job generates a report that lists the categories checked and any inconsistencies
found. The report is saved to the repository in /System/Sysadmin/Reports/
ConsistencyChecker. If no errors are found, the current report overwrites the
previous report. If an error is found, the current report is saved as a new version of
the previous report.
It is recommended that you run this tool on a repository before upgrading the
repository to a new version of the Documentum Server.
repository_name is the name of the repository against which you are running the
consistency checker, superuser is the user name of a repository superuser, and
password is the password for the account of superuser.
When you run the consistency checker from the command line, the results of the
checks are directed to standard output.
15.4.1.5.2 Arguments
Here is a sample of a Consistency Checker report. This run of the tool found X
inconsistencies.
Beginning Consistency Checks.....
#################################################
## CONSISTENCY_CHECK: Users & Groups
## Start Time: 09-10-2002 10:15:55
#################################################
#################################################
## CONSISTENCY_CHECK: ACLs
## Start Time: 09-10-2002 10:15:55
#################################################
#################################################
## CONSISTENCY_CHECK: Sysobjects
## Start Time: 09-10-2002 10:15:58
#################################################
#################################################
## CONSISTENCY_CHECK: Folders and Cabinets
## Start Time: 09-10-2002 10:16:02
#################################################
#################################################
## CONSISTENCY_CHECK: Documents
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Content
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Workflow
## Start Time: 09-10-2002 10:16:03
#################################################
#################################################
## CONSISTENCY_CHECK: Types
## Start Time: 09-10-2002 10:16:04
#################################################
#################################################
## CONSISTENCY_CHECK: Data Dictionary
## Start Time: 09-10-2002 10:16:04
#################################################
#################################################
## CONSISTENCY_CHECK: Lifecycles
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: FullText
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: Indices
## Start Time: 09-10-2002 10:16:11
#################################################
#################################################
## CONSISTENCY_CHECK: Methods
## Start Time: 09-10-2002 10:16:11
#################################################
By default, the tool processes the content files in batches. It retrieves up to 500
content files (the default batch size) and releases resources in the source database
before replicating the files. You can adjust the size of the batches by setting the -
batch_size argument. Each execution of the job may process multiple batches,
depending on the number of content files to be replicated and the batch size.
If the -batch_size argument is set to any value greater than 1 and content transfer
operations fail for the whole batch, the job exits and displays an error message.
The job uses a login ticket to connect to each source server. If you include the -
source_servers argument, the job connects only to the servers in the list. If you do
not include that argument, the job attempts to connect to each server in the
repository.
Note: The clocks on the host machines of the source servers must be using
UTC time and must be synchronized with the host machine on which the job
runs. The login ticket for the job is valid for 5 minutes. If the clocks are not
synchronized or the machines are using times set to different time zones, the
source server to which the job is connecting may determine that the ticket has
timed out and the job will fail.
A content replication job looks for all content not locally present, gets the files while
connected to other sites, and performs an IMPORT_REPLICA for each content file in
need of replication. The job generates a report that lists each object replicated. The
report is saved to the repository in /System/Sysadmin/Reports/ContentReplication.
Note: If the report was run against the content at a remote distributed site, the
report name has the server configuration name appended with the name of the site.
For example, if London is a remote site, its report would be found in /System/
Sysadmin/Reports/ContentReplicationLondon.
Installing the tool suite at a site creates a content replication job for the installation
site. In a distributed environment, the argument values of the jobs for the remote
sites are based on those of the Content Replication job for the primary site, but the
job name and target server are unique for each site. The job name has the format:
dm_ContentReplicationserverconfig.object_name
The target_server property of the job identifies the local server performing the
replication, using the format repository.serverconfig@hostname.
15.4.1.6.1 Arguments
“Content replication arguments” on page 380, describes the arguments for the tool.
The argument accepts a maximum of 255 characters. The specified names are
recorded in the method_arguments property of the job. If this argument is not
included, the job attempts to connect to all other servers in the repository.
The tool determines where the repository is storing its content and then uses
operating system commands to determine whether these disks are reaching the
specified threshold. When the disk space used meets or exceeds the value in the
percent_full argument of the tool, a notification is sent to the specified queueperson and
a report is generated and saved to the repository in /System/Sysadmin/Reports/
ContentWarning.
Notes
• If the tool was run against the content at a remote distributed site, the report
name has the server configuration name appended with the name of the site. For
example, if London is a remote site, its report would be found in /System/
Sysadmin/Reports/ContentWarningLondon.
• The dm_ContentWarning job may not work properly when the disk space is
in terabytes scale.
15.4.1.7.1 Arguments
“Content warning arguments” on page 382, describes the arguments for the tool.
Here is a sample of a Content Warning report:
-Active 2,485,090
-Deleted 81,397
-Total 2,566,487
Object: test1_file_store2
Type : dm_filestore
Path : /export/nfs2-1/dmadmin/data/test1/storage_02
Object: support_file_store
Type : dm_filestore
Path : /export/nfs2-1/dmadmin/data/test1/docs_from_test2
Data Dictionary Publisher generates a status report that is saved in the repository
in /System/Sysadmin/Reports/DataDictionaryPublisher.
15.4.1.8.1 Arguments
“Data dictionary publisher argument” on page 384, describes the argument of the tool.
Connected To sqlntX.sqlntX
Job Log for System Administration Tool
DataDictionaryPublisher
-----------------------------------------------
Start of log:
-------------
DataDictionaryPublisher Tool Completed at
11/8/2000 12:15:25. Total duration was 0 minutes.
Calling SetJobStatus function...
--- Start
c:\Documentum\dba\log\000145df\sysadmin\
DataDictionaryPublisherDoc.txt report output ----
DataDictionaryPublisher Report For repository sqlntX
As Of 11/8/2000 12:15:24
DataDictionaryPublisher utility syntax:
apply,c,NULL,EXECUTE_DATA_DICTIONARY_LOG
Executing DataDictionaryPublisher...
Report End 11/8/2000 12:15:25
--- End
c:\Documentum\dba\log\000145df\sysadmin\
DataDictionaryPublisherDoc.txt report output ---
The tool also recreates any indexes that are identified by dmi_index objects but not
found in the database.
Note: The Database Space Warning Tool is not needed, and therefore not
installed, for installations running against SQL Server.
If the tool finds that the space has reached the limit specified in the percent_full
argument of the tool, it sends a notification to the user specified in queueperson.
When it sends a notification, it also includes a message about any RDBMS tables
that are fragmented beyond the limits specified in the max_extents argument and a
message regarding indexes, if it does not find the expected number in the RDBMS.
The notifications are sent to the user's repository Inbox and through email.
In addition to these notifications, the tool generates a status report that is saved in
the repository in /System/Sysadmin/Reports/DBWarning.
For Sybase, you must set the ddl in tran database option to TRUE to run this job. The
isql syntax is:
sp_dboption dbname, "ddl in tran", true
15.4.1.9.1 Arguments
“Database space warning arguments” on page 385, lists the arguments of the tool.
Here is a sample of a Database Space Warning report. It shows the total number of
blocks allocated for the database, how many are currently used for tables and
indexes, the percentage used of the total allocated, and the number of free blocks. It
also lists the number of fragments for all tables and indexes with more than
max_extents fragments and lists the number of Documentum indexes in the
repository.
Database Block Allocation
Table   Index   Total Used   Free     Total    %Used/Total
620     647     1,267        81,929   83,196   2
Here is a sample Inbox message sent by the Database Space Warning tool:
Take a look at your DBMS tablespace--it's 90% full!
You have 8 fragmented tables in your DBMS instance
--you may want to correct this!
You are missing some Documentum indexes-contact Support!
DOCBASE: test1
EVENT: FraggedTables
NAME: DBWarning.Doc
SENT BY: dmadmin
TASK NAME: event
MESSAGE:
You have 18 fragmented tables in your DBMS instance
--you may want to correct this!
15.4.1.11 Archive
The Archive tool automates archive and restore between content areas. Archive
older or infrequently accessed documents to free up disk space for newer or more
frequently used documents. Restore archived documents to make the archived
documents available when users request them. For complete information on
configuring archiving, refer to “Archiving and restoring documents” on page 545.
15.4.1.11.1 Arguments
“Archive arguments” on page 388 describes the arguments for the tool
15.4.1.11.2 Guidelines
The Archive tool puts all the documents it is archiving in one dump file. This means
you must move the entire dump file back to the archive directory to restore a single
document in the file. If the files are extremely large, this can be a significant
performance hit for restore operations.
15.4.1.12 Dmclean
The Dmclean tool automates the dmclean utility. The utility scans the repository for
orphaned content objects and orphaned content files. The utility generates a script to
remove these orphans. The Dmclean tool performs the operations of dmclean and
(optionally) runs the generated script.
When the agent exec program invokes the script, the tool generates a report showing
what content objects and content files are removed upon execution of the generated
script. The status report is saved in /System/Sysadmin/Reports/DMClean.
Note: If the tool was run against the content at a remote distributed site, the
report name has the server configuration name appended with the name of
the site. For example, if London is a remote site, its report would be found in /
System/Sysadmin/Reports/DMCleanLondon.
15.4.1.12.1 Arguments
“Dmclean arguments” on page 390, describes the arguments for the tool.
15.4.1.12.2 Guidelines
If you are using distributed content, Dmclean requires the default storage area for
dm_sysobjects to be the distributed store.
Executing DMClean...
All Clean contents were successfully removed.
Generated script from the DMClean method:
----- Start
C:\Documentum\dba\log\000003e8\sysadmin\
080003e8800005d6.bat
output ------------------------
# Opening document base testdoc...
# Total shared memory size used: 1554112 bytes
# Making /System cabinet.
# /System cabinet exists.
# Making /Temp cabinet.
# /Temp cabinet exists.
# Making /System/Methods folder.
# /System/Methods folder exists.
# Making /System/FileSystem folder.
# /System/FileSystem folder exists.
# Making /System/DataDictionary folder.
# /System/DataDictionary folder exists.
# Making /System/Procedures folder.
# /System/Procedures folder exists.
# Making /System/Procedures/Actions folder.
# /System/Procedures/Actions folder exists.
# Making /System/Distributed References folder.
# /System/Distributed References folder exists.
# Making /System/Distributed References/Links
folder.
# /System/Distributed References/Links folder
exists.
# Making /System/Distributed References/Checkout
folder.
# /System/Distributed References/Checkout folder
exists.
# Making /System/Distributed References/Assemblies
folder.
# /System/Distributed References/Assemblies folder
exists.
# Making /System/Distributed References/Workflow
folder.
# /System/Distributed References/Workflow folder
exists.
# Making /System/Distributed References/VDM folder.
# /System/Distributed References/VDM folder exists.
# Making docbase config object.
# Making server configuration object.
#
# dmclean cleans up orphan content objects and content files.
# Instead of immediately destroying the orphan content objects,
# dmclean generates an API script, which can
# be used for verification before cleanup actually
# happens.
# This is done in this manner because deleted content
# objects by mistake are difficult to recover.
#
15.4.1.13 Dmfilescan
The Dmfilescan tool automates the dmfilescan utility. This utility scans a specific
storage area or all storage areas for any content files that do not have associated
content objects and generates a script to remove any that it finds. The tool executes
the generated script by default, but you can override the default with an argument.
“dmfilescan utility” on page 541 provides detailed information about the dmfilescan
utility.
Dmfilescan also generates a status report that lists the files it has removed. The
report is saved in the repository in /System/Sysadmin/Reports/DMFilescan.
Note: If the tool was run against the content at a remote distributed site, the
report name has the server configuration name appended with the name of the site.
For example, if London is a remote site, its report would be found in /System/
Sysadmin/Reports/DMFilescanLondon.
15.4.1.13.1 Arguments
“Dmfilescan arguments” on page 393, lists the arguments for the tool. Refer to the
description of the dmfilescan utility in “dmfilescan utility” on page 541, for
instructions on specifying values for the -from and -to arguments.
15.4.1.13.2 Guidelines
Typically, if you run the Dmclean tool regularly, it is not necessary to run the
Dmfilescan tool more than once a year. By default, the tool removes all orphaned
files from the specified directory or directories that are older than 24 hours. If you
wish to remove orphaned files younger than 24 hours, you can set the -force_delete
flag to T (TRUE). However, this flag is intended for use only when you must remove
younger files to clear disk space or to remove temporary dump files created on the
target that were not removed automatically. If you execute Dmfilescan with -
force_delete set to T, make sure that there are no other processes or sessions creating
objects in the repository at the time the job executes.
If you are using distributed content, dmfilescan requires the default storage area for
dm_sysobjects to be the distributed store.
15.4.1.14.1 Arguments
• docbase_name
• user_name
• password
• method_trace_level
• (optional) folderQual: ID of the folder or query.
• (optional) report_only_mode: Valid values are True or False. Default is True,
which means only a report is generated and the corrupted folders are not fixed.
If a document must be recreated, these reports identify which files must be restored
to rebuild the document. The system administrator matches lost documents to the
file names so that the content files can be recovered. This feature is especially useful
for restoring a single document (or a small set of documents) to a previous state,
which cannot be done from database backups.
The File Report tool, as installed, runs a full report once a week against all file storage
areas in the repository. It is possible to run incremental reports and reports that only
examine a subset of the storage areas for the repository. “Creating incremental or
partial-repository reports” on page 399 provides instructions to set up reports.
Note: File Report only provides a mechanism for restoring the document
content. The document metadata must be restored manually.
15.4.1.20.1 Guidelines
Set up the File Report schedule on the same interval as the file system backups. For
example, if nightly backups are done, also run File Report nightly and store the
resulting report with the backups.
If your repository is so large that creating full reports is not practical or generates
cumbersome files, set up multiple jobs, each corresponding to a different storage
area.
• Creating new file report jobs to create incremental reports or reports for a subset
of storage areas.
• Using file reports to recover a document
If you include the -storage_area argument, the job generates a report on the
documents in the specified storage area.
If you include the -folder_name argument, the job generates a report on documents
in the specified folder.
To create a job that generates incremental reports or only reports on some storage
areas, copy an existing File Report job object and set its properties and arguments as
needed. Provide a new name for the copy that identifies it meaningfully.
“Creating a job” on page 436, provides instructions for creating new jobs, and
OpenText Documentum Server System Object Reference Guide contains a description of
the properties of the dm_job object. You can create or copy a job using Documentum
Administrator.
1. Find the last backup file report with the document listed in it.
2. Find the name(s) of the content file(s) which comprise the document.
3. Restore the named files from your file system backups to the original storage
area.
This restores the content files to the storage area directory. This does not restore
the documents to the repository. Until the files are imported back into the
repository, they are treated as orphaned content files and will be removed by
the next invocation of the dm_clean utility.
If you wish, you can move these files out of the storage area directories to a
more appropriate work area in order to import them into the repository.
15.4.1.20.5 Arguments
“File report arguments” on page 400, lists the arguments for the tool.
directory_path/file_name
• Document object_id
• Document folder_path and object_name
• Document owner
• Document modification date
• Document version
• Content format
• Content page number
• Content file name
/u120/install/dmadmin/data/rogbase2/content_storage_01
/000015b4/80/00/01/49.doc
100015b4800001b5 /roger/research/newproject_1 roger
4/26/95 19:07:22 1.3 msw6 1
/u120/install/dmadmin/data/rogbase2/content_storage_01
/000015b4/80/00/01/4a.doc
The following sample report describes a single-page Word document with a PDF
rendition:
090015b4800004f1 /roger/research/newproject_2
roger 6/16/95 20:00:47 1.7 msw6 0
/u120/install/dmadmin/data/rogbase2/content_storage_01
/000015b4/80/00/02/52.txt
090015b4800004f1 /roger/research/newproject_2 roger
6/16/95 20:00:47 1.7 pdf 0
/u120/install/dmadmin/data/rogbase2/
content_storage_01/000015b4/80/00/02/6e.pdf
• Running the Group Rename tool immediately after you identify the new name
• Queuing the operation until the next scheduled execution of the Group Rename
tool
You cannot use a set method to change a group name. You must go through
Documentum Administrator and either a manual or automatic Group Rename
execution to change a group name. The Group Rename tool generates a report that
lists the changes made to the repository objects for the group rename. The report is
saved in the repository in /System/Sysadmin/Reports/GroupRename. The tool is
installed in the inactive state.
15.4.1.21.1 Arguments
15.4.1.22 Dm_LDAPSynchronization
The dm_LDAPSynchronization tool finds the changes in the user and group
information in an LDAP directory server that have occurred since the last execution
of the tool and propagates those changes to the repository. If necessary, the tool
creates default folders and groups for new users. If there are mapped user or group
properties, those are also set.
• Import new users and groups in the directory server into the repository
• Rename users in the repository if their names changed in the directory server
• Rename groups in the repository if their names changed in the directory server
• Inactivate users in the repository if they were deleted from the directory server.
When using iPlanet, you must enable the changelog feature to use the inactivation
operation. Instructions for enabling the changelog feature are found in the iPlanet
documentation.
The dm_LDAPSynchronization tool requires the Java method server. Make sure that
the Java method server in your Documentum Server installation is running.
15.4.1.22.1 Arguments
If you do want to import this attribute, set the dm_ldap_config.bind_type to bind_search_dn and set the argument import_user_ldap_dn to FALSE. OpenText Documentum Server System Object Reference Guide contains more information.
“Explicitly specifying LDAP servers in -source_directory” on page 407, describes how to specify multiple servers individually.
-window_interval integer 1440 Execution window for the tool.
You can execute the dm_LDAPSynchronization tool manually from the command
line. The syntax is
java com.documentum.ldap.LDAPSync -docbase_name <repositoryname> -user_name
<superuser_login> -method_trace_level <integer> -full_sync true
Use the object name of the LDAP server to identify the server in the -source_directory
argument. If you want to identify multiple servers, use the following syntax:
ldap_config_obj_name+ldap_config_obj_name{+ldap_config_obj_name}
where ldap_config_obj_name is the object name value of the ldap config object for
the LDAP directory server. For example:
ldap_engr1+ldap_engr2+ldapQA
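For illustration only, this Python sketch assembles the manual command line shown above, joining several ldap config object names with a plus sign for -source_directory; the helper itself is hypothetical and does not run the synchronization:

```python
# Hypothetical helper that assembles the manual dm_LDAPSynchronization
# command line shown above; it does not execute the synchronization itself.

def ldap_sync_command(repository, superuser, servers=None, trace_level=1):
    argv = ["java", "com.documentum.ldap.LDAPSync",
            "-docbase_name", repository,
            "-user_name", superuser,
            "-method_trace_level", str(trace_level),
            "-full_sync", "true"]
    if servers:
        # Multiple LDAP servers are joined with '+', per the syntax above.
        argv += ["-source_directory", "+".join(servers)]
    return argv

cmd = ldap_sync_command("rogbase2", "dmadmin",
                        ["ldap_engr1", "ldap_engr2", "ldapQA"])
print(" ".join(cmd))
```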
Result log files are generated by the execution of methods when the
SAVE_RESULTS argument of the method is set. Result log files are stored in Temp/
Result.method_name.
Job log files are generated when a job is run. The job log file for tools contains the
trace file of the job and the text of its report. Job log files are stored in Temp/Jobs/
job_name/log_file.
The lifecycle log files are generated when a lifecycle operation such as promote or
demote occurs. The files are named bp_transition_*.log or bp_schedule_*.log,
depending on the operation. They are stored in %DOCUMENTUM%\dba\log
\repository_id\bp ($DOCUMENTUM/dba/log/repository_id/bp).
Files are considered old and are deleted if they were modified prior to a user-
defined cutoff date. By default, the cutoff date is 30 days prior to the current date.
For instance, if you run Log Purge on July 27, all log files that were modified before
June 28 are deleted. You can change the cutoff interval by setting the -cutoff_days
argument for the tool. “Arguments” on page 409 provides instructions.
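The cutoff rule can be sketched as follows; the dates and function are illustrative, and the actual job reads -cutoff_days from its arguments:

```python
# Sketch of the Log Purge cutoff rule: a log file is a deletion candidate
# when its modification date falls before (run date - cutoff_days).
from datetime import date, timedelta

def is_purgeable(modified, run_date, cutoff_days=30):
    return modified < run_date - timedelta(days=cutoff_days)

run = date(2022, 7, 30)                      # illustrative run date
print(is_purgeable(date(2022, 6, 15), run))  # modified well before the cutoff
print(is_purgeable(date(2022, 7, 25), run))  # recently modified, kept
```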
Log Purge generates a report that lists all directories searched and the files that were
deleted. The report is saved in the repository in /System/Sysadmin/Reports/
LogPurge.
Note: If the tool is run at a remote distributed site, the report name has the
server configuration name appended with the name of the site. For example, if London
is a remote site, its report would be found in /System/Sysadmin/Reports/
LogPurgeLondon.
15.4.1.23.1 Arguments
“Log purge arguments” on page 409, lists the arguments for the tool.
15.4.1.23.2 Guidelines
Your business rules will determine how long you keep old log files and result log
files. However, we recommend that you keep them for at least one month, as you may
need them to debug a problem or to monitor the result of a method or job.
We recommend that you run this tool daily. This ensures that your repository never
has log files older than the number of days specified in the cutoff_days argument.
The following is a sample of a Log Purge report. Its start_date property is set to June
3, 1996. The cutoff_days argument is set to 30 so that all logs older than 30 days will
be deleted. The report looks for server and connection broker logs, session logs, and
result logs from method objects and job objects (older than 30 days) and destroys
them.
LogPurge Report For DocBase boston2
As Of 7/25/96 7:18:09 PM
Parameters for removing Logs:
-----------------------------
- Inbox messages will be queued to boston2
- Logs older than 30 days will be removed...
/u106/dm/dmadmin/dba/log/00000962/tuser2
Removing /u106/dm/dmadmin/dba/log/00000962/tuser2/
0100096280000e73
Changing directory to:
/u106/dm/dmadmin/dba/log/00000962/sysadmin
Changing directory to:
/u106/dm/dmadmin/dba/log/00000962/tuser5
Looking for result logs from dm_method objects...
Destroying Result.users_logged_in object
Destroying Result.users_logged_in object
Destroying Result.dmclean object
Destroying Result.dmclean object
Destroying Result.dmclean object
Destroying Result.dmclean object
Destroying Result.users_logged_in object
Destroying Result.users_logged_in object
Looking for result logs from dm_job objects...
Destroying 06/01/96 11:40:27 boston1 object
Destroying 05/31/96 11:40:41 boston1 object
Destroying 06/02/96 11:40:30 boston1 object
Destroying 06/03/96 11:40:15 boston1 object
Destroying 06/04/96 11:40:03 boston1 object
Destroying 06/19/96 11:40:00 boston1 object
Destroying 06/20/96 11:40:55 boston1 object
Destroying 06/22/96 11:40:32 boston1 object
Destroying 06/21/96 11:40:34 boston1 object
Destroying 06/24/96 11:40:10 boston1 object
Destroying 06/23/96 11:40:24 boston1 object
Destroying 06/25/96 11:40:07 boston1 object
Destroying 06/13/96 11:40:08 boston1 object
Destroying 06/16/96 11:40:18 boston1 object
Destroying 06/14/96 11:40:28 boston1 object
Destroying 06/15/96 11:40:27 boston1 object
Destroying 06/17/96 11:40:12 boston1 object
Destroying 06/18/96 11:40:07 boston1 object
Destroying 06/12/96 11:40:06 boston1 object
Destroying 06/11/96 11:40:11 boston1 object
Destroying 06/09/96 11:40:46 boston1 object
Destroying 06/08/96 11:40:03 boston1 object
Destroying 06/10/96 11:40:29 boston1 object
Destroying 06/06/96 11:40:01 boston1 object
Destroying 06/07/96 11:40:55 boston1 object
Destroying 06/05/96 11:40:02 boston1 object
Destroying 05/29/96 11:40:06 boston1 object
Destroying 05/30/96 11:40:49 boston1 object
example, the tool could delete all dequeued dmi_queue_items that are older than 30
days and were queued to a specific user.
The tool generates a status report that provides you with a list of the deleted
dmi_queue_items. The report is saved in the repository in /System/Sysadmin/
Reports/QueueMgt.
If there is an error in the execution of the tool, an email and Inbox notification is sent to
the user specified by the -queueperson argument.
15.4.1.24.1 Arguments
“Queue management arguments” on page 412, lists the arguments for the tool.
The qualification must be a valid qualification and must work against the dmi_queue_item object. For example, a valid qualification is “event=‘APPROVED’” or “name=‘dmadmin’”.
To include all dequeued items in the search, set this value to zero (0).
-queueperson string(32) - User who receives email and inbox notifications from the tool. The default is the user specified in the operator_name property of the server configuration object.
-window_interval integer 120 Execution window for the tool.
Note: The tool creates a base qualification that contains two conditions:
• delete_flag = TRUE
15.4.1.24.2 Guidelines
Dequeued items are the result of moving objects out of an inbox. Objects are placed
in inboxes by workflows, event notifications, archive and restore requests, or explicit
queue methods. Objects are moved out of an inbox when they are completed or
delegated.
You must decide if there are any reasons to keep or maintain dequeued items. For
example, you may want to keep dequeued items for auditing purposes. If so, leave
this tool inactive or set the cutoff_days argument to a value that will save the
dequeued items for a specified length of time.
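As a sketch only, a custom qualification such as the name='dmadmin' example above could be combined with the tool's delete_flag condition like this; the helper and the exact query shape are assumptions, not the tool's internal code:

```python
# Illustrative sketch: combine the tool's base delete_flag condition with an
# optional custom qualification, joined with AND, against dmi_queue_item.

def build_qualification(custom=None):
    conditions = ["delete_flag = TRUE"]
    if custom:
        conditions.append(custom)
    return "dmi_queue_item WHERE " + " AND ".join(conditions)

print(build_qualification("name='dmadmin'"))
```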
Note: The first time you execute the Queue Management tool, it may take a
long time to complete if dequeued items have never been deleted before.
After the tool runs the method to find the objects, it uses a destroy method to
remove them from the repository.
The tool generates a status report that provides you with a list of the deleted objects.
The report is saved in the repository in /System/Sysadmin/Reports/
RemoveExpiredRetnObjects. For each deleted object, the report lists the following
properties:
• r_object_id
• object_name
• a_storage_type
• r_creation_date
• retention_date
The retention_date property is a computed property.
15.4.1.25.1 Arguments
“Remove expired retention objects arguments” on page 415, describes the arguments
for the tool.
15.4.1.25.2 Guidelines
A Centera or NetApp SnapLock storage area can have three possible retention
period configurations:
• The storage area may allow but not require a retention period.
In this case, the a_retention_attr_name property is set, but
a_retention_attr_req is set to F (FALSE).
By default, the method does not include objects whose content has a 0 retention
period because the assumption is that such content is meant to be kept forever.
However, in a storage area that allows but does not require a retention period, a 0
retention period can result from two possible causes:
• The user deliberately set no retention period, and consequently, the server set the
retention period to 0
• The user specified a retention date that had already elapsed. When this occurs,
the server sets the retention period to 0.
Because the meaning of 0 is ambiguous in such storage areas, the tool supports the
INCLUDE_ZERO_RETENTION_OBJECTS argument to allow you to include
content with a zero retention in storage areas that allow but do not require a
retention period.
The OpenText Documentum Server DQL Reference Guide contains information on the
underlying method.
The arguments of the tool define which renditions are removed. The tool can delete
renditions based on their age, format, or source (client- or server-generated). The
tool removes the content objects associated with unwanted renditions. The next
execution of the Dmclean tool automatically removes the renditions' orphaned
content files (assuming that the clean_content argument of Dmclean is set to TRUE).
Note: Renditions with the page modifier dm_sig_template are never removed
by Rendition Manager. These renditions are electronic signature page
templates; they support the electronic signature feature available with Trusted
Content Services.
The report generated by the tool lists the renditions targeted for removal. The report
is saved in the repository in /System/Sysadmin/Reports/RenditionMgt.
15.4.1.26.1 Arguments
“Rendition manager arguments” on page 417, describes the arguments for the
Rendition Manager tool.
Note: The rendition property in a content object is set to 3 when a rendition is added to content with the keepRendition argument in the addRendition method set to T (TRUE).
-keep_esignature Boolean TRUE Controls whether renditions with the page modifier dm_sig_source are removed. T (TRUE) means to keep those renditions. F (FALSE) means to remove those renditions.
all, none, or a list of formats. If a list of formats is specified, the format names must be separated by single spaces.
all, none, or a list of formats. If a list of formats is specified, the format names must be separated by single spaces.
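The all/none/format-list convention above can be sketched as follows; the function name and argument handling are illustrative assumptions, not the tool's implementation:

```python
# Sketch: interpret an "all | none | space-separated list" format argument
# when deciding whether a rendition's format is targeted for removal.

def format_targeted(arg_value, rendition_format):
    if arg_value == "none":
        return False
    if arg_value == "all":
        return True
    # A list of format names must be separated by single spaces.
    return rendition_format in arg_value.split(" ")

print(format_targeted("pdf msw8", "pdf"))
print(format_targeted("none", "pdf"))
```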
15.4.1.26.2 Guidelines
The Rendition Manager tool removes all renditions that meet the specified
conditions and are older than the specified number of days. For example, if you
execute this tool on July 30 and the -cutoff_days argument is set to 30, then all
renditions created or modified prior to June 30 are candidates for deletion. On July
31, all renditions created or modified before July 1 are removed.
We recommend that you run this tool with the -report_only argument set to TRUE
first to determine how much disk space renditions are using and what type of
renditions are in the repository. With -report_only set to TRUE, you can run the tool
several times, changing the other arguments each time to see how they affect the
results.
We recommend that you have a thorough knowledge of how and when renditions
are generated before removing any. For example, client renditions, such as those
added explicitly by an addRendition method, may be difficult to reproduce if they
are deleted and then needed later. Documentum Server renditions are generated on
demand by the server and are generally easier to reproduce if needed.
To make sure that your repository never has rendition files older than the number of
days specified in the -cutoff_days argument, run this tool daily.
Note: The first time you execute the Rendition Manager tool, it may take a long
time to complete if the old renditions have never been deleted before.
Files are considered old and are deleted if they were modified prior to a user-
defined cutoff date. By default, the cutoff date is 30 days prior to the current date.
For instance, if you run SCS Log Purge on July 27, all log files that were modified
before June 28 are deleted. You can change the cutoff interval by setting the
-cutoff_days argument for the tool.
SCS Log Purge generates a report that lists all directories searched and the files that
were deleted. The report is saved in the repository in /System/Sysadmin/Reports/
SCSLogPurge.
15.4.1.28.1 Arguments
“State of the repository arguments” on page 422, describes the arguments for the tool.
Here is a sample report from the State of the Repository tool. The example shows all
of the sections in the tool's report, but truncates entries in some sections in the
interest of space.
StateOfDocbase Report For Docbase dm_master As Of 4/12/1999 09:26:32
Docbase Configuration:
Description: The Test Repository
Federation Name: <dm_master is not in a Federation>
Docbase ID: 22000
Security Mode: acl
Folder Security: On
Authorization Protocol: <Not defined>
Database: SQLServer
RDBMS Index Store: <Not defined>
Mac Access Protocol: none
Server Configuration:
Server Name: dm_master
Server Version: 4.0 Win32.SQLServer7
Default ACL: Default ACL of User
Host Name: bottae1
Install Owner: dmadmin
Install Domain: bottae1
Operator Name: dm_master
Agent Launcher: agent_exec_method
Checkpoint Interval: 300 seconds
Compound Integrity: On - Server enforces integrity for virtual documents
Turbo Backing Store: filestore_01
Rendition Backing Store: <Not defined>
Web Server Location: BOTTAE1
Web Server Port: 80
Rightsite Image: /rs-bin/RightSit
Server Locations:
events_location D:\DOCUMENTUM\share\data\events
common_location D:\DOCUMENTUM\share\data\common
temp_location D:\DOCUMENTUM\share\temp
log_location D:\DOCUMENTUM\dba\log
system_converter_location D:\DOCUMENTUM\product\4.0\convert
user_converter_location <Not defined>
verity_location D:\DOCUMENTUM\product\4.0\verity
user_validation_location <Not defined>
assume_user_location D:\DOCUMENTUM\dba\dm_assume_user.exe
change_password_location D:\DOCUMENTUM\dba\dm_change_password.exe
signature_chk_loc D:\DOCUMENTUM\dba\dm_check_password.exe
stage_destroyer_location <Not defined>
Set Information:
CLASSPATH=C:\PROGRA~1\DOCUME~1\DFCRE40\lib\dfc.jar;C:\PROGRA~1\DOCUME~1\Classes;
C;\Netscape\ldapjava\packages
COLORS=white on blue
COMPUTERNAME=BOTTAE1
ComSpec=C:\WINNT\system32\cmd.exe
CVSROOT=godzilla.documentum.com:/docu/src/master
CVS_SERVER=cvs1-9
DM_HOME=D:\DOCUMENTUM\product\4.0
DOCUMENTUM=D:\DOCUMENTUM
HOMEDRIVE=C:
HOMEPATH=\
LOGONSERVER=\\BOTTAE1
NUMBER_OF_PROCESSORS=1
OS=Windows_NT
Os2LibPath=C:\WINNT\system32\os2\dll;
Path=D:\DOCUMENTUM\product\4.0\bin;C:\PROGRA~1\DOCUME~1\DFCRE40\bin;
D:\DOCUMENTUM\product\3.1\bin;C:\WINNT\system32;C:\WINNT;
c:\program files\maestro.nt;C:\PROGRA~1\DOCUME~1\Shared;
c:\SDK-Java.201\Bin;c:\SDK-Java.201\Bin\PackSign;c:\Winnt\Piper\dll;
c:\Program Files\DevStudio\SharedIDE\bin;
c:\Program Files\DevStudio\SharedIDE\bin\ide;D:\MSSQL7\BINN;
c:\cvs1.9;c:\csh\bin;c:\csh\samples;C:\DOCUME~1\RIGHTS~1\product\bin
PATHEXT=.COM;.EXE;.BAT;.CMD
PROCESSOR_ARCHITECTURE=x86
PROCESSOR_IDENTIFIER=x86 Family 5 Model 2 Stepping 5, GenuineIntel
PROCESSOR_LEVEL=5
PROCESSOR_REVISION=0205
PROMPT=$P$G
SystemDrive=C:
SystemRoot=C:\WINNT
TEMP=C:\TEMP
TMP=C:\TEMP
USERDOMAIN=BOTTAE1
USERNAME=dmadmin
USERPROFILE=C:\WINNT\Profiles\dmadmin
windir=C:\WINNT
adm_turbo_size dbo 1:1:1
dm_federation_log dbo 15:7:3
dm_portinfo dbo 1:1:1
dm_queue dbo 1:1:1
crtext 11
mdoc55 9
maker55 5
<NO CONTENT> 2
msw8template 2
vrf 2
wp6 2
excel5book 1
excel8book 1
excel8template 1
ms_access7 1
ms_access8_mde 1
msw6 1
msw8 1
powerpoint 1
ppt8 1
ppt8_template 1
ustn 1
Total: -------------
44
filestore_01 40
dm_turbo_store 4
Total: -------------
44
Format Largest Average Total
mdoc55 158 85 772
crtext 101 15 436
text 196 21 254
vrf 192 119 238
maker55 37 22 114
Format Largest Average Total
filestore_01
Largest Content: 196
Average Content: 31
Total Content: 2,092
dm_turbo_store
Largest Content: 8
Average Content: 5
Total Content: 34
------------------------
GTotal Content: 2,127
GTotal Rendition: (0.00% of total content)
ACL Summary:
Number of ACLs: 33
Number of Internal ACLs: 29
Number of External System ACLs: 4
Number of External Private ACLs:
Note: If the tool was run at a remote distributed site, the report name has the
server configuration name appended with the name of the site. For example, if London
is a remote site, its report would be found in /System/Sysadmin/Reports/
SwapInfoLondon.
15.4.1.29.1 Arguments
“Swap info arguments” on page 426, describes the arguments for the tool.
The format of the report generated by Swap Space Info varies by operating system.
Here is a sample:
SwapInfo Report For DocBase boston2
As Of 7/26/96 4:00:59 PM
Summary of Total Swap Space Usage and Availability:
When you run the tool against an Oracle or Sybase database, the tool uses a file that
contains commands to tweak the database query optimizer. For Oracle, the file is
named custom_oracle_stat.sql. For Sybase, it is named custom_sybase_stat.sql. The
file is stored in %DOCUMENTUM%\dba\config
\repository_name ($DOCUMENTUM/dba/config/repository_name). You can add
commands to this file. However, do so with care. Adding to this file affects query
performance. If you do add a command, you can use multiple lines, but each
command must end with a semi-colon (;). You cannot insert comments into this file.
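The command rules above (multi-line commands, each terminated by a semicolon, no comments) can be sketched with this illustrative splitter; it is not the server's actual parser:

```python
# Illustrative splitter for a custom_*_stat.sql-style file: commands may
# span multiple lines, each must end with a semicolon, and comments are not
# allowed (so none are handled here).

def split_commands(text):
    parts = [p.strip() for p in text.split(";")]
    # After the final semicolon, only whitespace may remain.
    if parts[-1]:
        raise ValueError("every command must end with a semicolon")
    return [p for p in parts[:-1] if p]

sample = "UPDATE STATISTICS dm_sysobject_s;\nUPDATE STATISTICS\n  dm_sysobject_r;"
print(split_commands(sample))
```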
The -dbreindex argument controls whether the method also reorganizes database
tables in addition to computing statistics. For SQL Server, you can set the argument
to either READ or FIX. Setting it to READ generates a report on fragmented tables
but does not fix them. Setting it to FIX generates the report and fixes the tables. (In
either case, the report is included in the overall job report.)
For Sybase, the -dbreindex argument is only effective if set to FIX, to reorganize the
tables. Setting it to READ does not generate a report on Sybase. If you include the -
dbreindex argument set to FIX, the repository owner (the account under which the
tool runs) must have sa role privileges in the database.
The Update Statistics tool is installed in the active state, running once a week.
Because this tool can be CPU and disk-intensive, it is recommended that you run the
tool during off hours for database use. Consult with your RDBMS DBA to determine
an optimal schedule for this tool.
15.4.1.30.1 Arguments
“Update statistics arguments” on page 428, lists the arguments for the tool.
This is a required argument on SQL Server and Sybase. It is set for the job when the administration tools are installed in repositories running against a SQL Server or Sybase database.
-queueperson string(32) - User who receives email and inbox notifications from the tool. The default is the user specified in the operator_name property of the server configuration object.
-window_interval integer 120 Execution window for the tool.
15.4.1.30.2 Guidelines
When the job is run with -dbreindex set to READ and the statistics need updating,
the report will say:
-dbreindex READ. If rows appear below, the corresponding
tables are fragmented.
Change to -dbreindex FIX and rerun if you want to reindex
these tables.
When the job is run with -dbreindex set to FIX, the report will say:
-dbreindex FIX. If rows appear below, the corresponding
tables have been reindexed.
Change to -dbreindex READ if you do not want to reindex
in the future.
The Update Statistics report tells you when the tool was run and which tables were
updated. The report lists the update statistics commands that it runs in the order in
which they are run. Here is a sample of the report:
Update Statistics Report:
The dm_usageReport job runs monthly to generate a usage report. “Viewing job
reports” on page 456 contains the information to view the report. You can also
generate a current report. “Running jobs” on page 455 contains the information to
generate a current report.
The information stored in dm_usage_log can also be exported from the global
registry by using a utility program. Execute the following Java utility:
Where repository is the global registry name and password is the dm_report_user
password. Use your OS shell tools to redirect the output to a file.
• Execute the UserChgHomeDb tool immediately after you save the change
You cannot change the home repository of a user using a set method. When the
home repository of a user changes, the change must be cascaded to several other
objects that make use of it.
The UserChgHomeDb tool generates a report that lists the objects changed by the
operation. The report is saved in the repository in /System/Sysadmin/Reports/
UserChgHomeDb.
15.4.1.32.1 Arguments
• Execute the User Rename tool immediately after you save the change
• Queue the change, to be performed at the next scheduled execution of User
Rename.
You cannot change the name of a user using a set method. When the name
of a user changes, the change must be cascaded to many other objects that make use
of it.
The User Rename tool generates a report that lists the objects changed by the
operation. The report is saved in the repository in /System/Sysadmin/Reports/
UserRename.
15.4.1.33.1 Arguments
This sample report was generated when the tool was run in Report Only mode.
Job: dm_UserRename
Report For Repository example.db_1;
As Of 3/6/2000 12:00:27 PM
The arguments you define for the tool determine which versions are deleted. For
example, one argument (keep_slabels) lets you choose whether to delete versions
that have a symbolic label. Another argument (custom_predicate) lets you define a
WHERE clause qualification to define which versions are deleted. Refer to the
OpenText Documentum Server DQL Reference Guide for instructions on writing
WHERE clauses.
The Version Management tool generates a status report that is saved in the
repository in /System/Sysadmin/Reports/VersionMgt.
15.4.1.34.1 Arguments
“Version management arguments” on page 433, lists the arguments for the tool.
The qualification must be a valid qualification. Refer to the OpenText Documentum Server DQL Reference Guide for information about WHERE clause qualifications.
15.4.1.34.2 Guidelines
We recommend that you have a thorough knowledge of what prior versions mean
for your business requirements. If you need to keep all versions to satisfy auditing
requirements, do not run this tool at all. Individual users or departments may also
have needs and requirements for older versions. Needs for disk space may also
affect the decisions about how many older versions to keep.
Run this tool initially with the -report_only argument set to TRUE to determine how
much disk space versions are using and how many versions are in the repository.
With -report_only set to TRUE, you can run the report several times, changing the
other arguments each time to see how they affect the results.
If you are using the -cutoff_days argument to ensure that your repository never has
versions older than a specified number of days, run this tool daily. If you are
using the -keep_latest argument to keep only a specified number of versions, you can
run this tool less frequently. The frequency will depend on how often new versions
are generated (thereby necessitating the removal of old versions to keep the number
of versions constant).
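The -keep_latest behavior described above can be sketched as follows; the helper is illustrative and ignores the tool's other arguments (symbolic labels, custom predicates):

```python
# Sketch: given version labels in oldest-to-newest order, everything except
# the newest keep_latest versions is a removal candidate.

def removal_candidates(versions_oldest_first, keep_latest):
    if keep_latest <= 0:
        return list(versions_oldest_first)
    return versions_oldest_first[:-keep_latest]

print(removal_candidates(["1.0", "1.1", "1.2", "2.0", "2.1"], 2))
```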
You can use this tool for one-time events also. For example, after a project is
completed, you might remove all older versions of the project documentation. You
must set the arguments appropriately for such occasions and reset them after the job
is finished.
Note: The first execution of the tool may take a long time to complete if the old
versions have never been deleted before.
The New Job and Job Properties pages are identical for standard jobs, replication
jobs, records migration jobs, remove expired retention objects, BOCS caching jobs,
and job sequences. The following topics contain the instructions on creating a
specific type of job:
Field Description
Name The name of the job object.
Job Type A label identifying the job type.
Field Description
Trace Level Controls how much information is recorded
in trace logs. May be set from 0 to 10.
“Viewing job trace logs” on page 456
contains the instructions on viewing trace
logs.
Designated Server When more than one server runs against a
repository, use to designate a server to run
the job. The default is Any Running Server.
State Determines how the job runs:
• If set to Active, the job runs as scheduled.
• If set to Inactive, the job does not run
automatically, but can be executed
manually.
Job Start Date Specifies when the job was started. This
option is a read-only field that only displays
for existing jobs.
Options
Deactivate on Failure Specifies whether to make the job inactive if
it does not run successfully.
Run After Update Specifies whether to run the job immediately
after any changes to the job are saved.
Save If Invalid Specifies whether to save the job object if
Documentum Administrator is unable to
validate the job.
Job Run History
Last Run Displays the last date and time the job ran
and was completed. This option is a read-
only field that displays for existing jobs.
Last Status Displays the last time the job completed and
the length of time the job took to run. This
option is a read-only field that displays for
existing jobs.
Last Return Code Displays the last value returned by the job.
This option is a read-only field that displays
for existing jobs.
Runs Completed Displays the number of times the job has run
to completion. This option is a read-only
field that displays for existing jobs.
Caution
Set up the schedules for replication jobs so that jobs for the same target
repository do not run at the same time. Running replication jobs
simultaneously to the same target repositories causes repository
corruption.
Field Description
Next Run Date and Time Specifies the next start date and time for the
job.
Create standard rules or custom rules on the New Job - Qualifier Rules or Job
Properties - Qualifier Rules page for content-addressable stores. There are no
restrictions on the number of conditions in a custom rule, but the length is limited to
255 characters.
Standard rules are limited to five selection criteria defined by choosing properties
from drop-down lists. The available properties are:
• Name
• Title
• Subject
• Authors
• Keywords
• Created
• Modified
• Accessed
After selecting a property, select an operand and type or select the correct value. For
example, two rules might be Name contains Linux and Created before January 1,
2004. When the job runs, the criteria are connected with AND, so that all criteria
must apply to a particular object for it to be deleted. If you require an OR (for
example, Name contains Linux OR Created before January 1, 2004), use a custom
rule.
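The AND semantics of standard rules can be sketched as follows; the predicate functions are illustrative stand-ins for the property tests chosen on the page:

```python
# Sketch: standard-rule criteria are connected with AND, so an object is
# selected only when every criterion matches it.

def matches_all(obj, criteria):
    return all(test(obj) for test in criteria)

criteria = [
    lambda o: "Linux" in o["name"],         # Name contains Linux
    lambda o: o["created"] < (2004, 1, 1),  # Created before January 1, 2004
]

print(matches_all({"name": "Linux notes", "created": (2003, 6, 1)}, criteria))
```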
A custom rule is entered into a text box as a DQL WHERE clause. Custom rules can
be based on the values of any standard SysObject
properties, provided those values are present before an object is saved. For example,
a custom rule might be object_name="Test" or object_name="Delete". Custom
rules are not validated by Documentum Administrator.
The associated method object has properties that identify the executable and define
command line arguments and the execution parameters. For example, the
dm_DMClean job executes the dm_DMClean method. Some Documentum jobs
execute a specific method that cannot be changed.
If you assign a user-defined method to a job, that method must contain the code to
generate a job report. If you turn on tracing, only a DMCL trace is generated.
Field Description
Method Name Specifies the name of the method that is
associated with the job.
Field Description
Arguments Specifies the method arguments.
Field Description
Title A name or title for the object.
Subject A subject that describes the object.
Keywords One or more keywords that describe the
object. Click Edit to add, modify, remove, or
change the order of keywords.
Authors One or more authors associated with the
object. Click Edit to add, modify, remove, or
change the order of authors.
Owner Name The user who owns the object. Click Edit to
add or modify the owner name.
Version Label The version number of the object. Click Edit
to add, modify, remove, or change the order
of version numbers.
Checkout Date The date on which the object was last
checked out. This is a read-only property.
Checked Out By The name of the user who checked out the
object. This is a read-only property.
Created The date on which the object was created.
This is a read-only property.
Creator Name The name of the user who created the object.
This is a read-only property.
Modified The date on which the object was last
modified. This is a read-only property.
Modified By The name of the user who modified the
object. This is a read-only property.
Accessed The date on which the object was last
accessed. This is a read-only property.
If you are replicating objects from multiple source repositories into the same target
repository, or if you are replicating replica objects, use a job sequence to designate the
order in which the jobs run so that they do not conflict with each other. “Creating
job sequences” on page 452 contains the information on creating job sequences.
The information and instructions in this section apply only to object replication, not
to content replication. You cannot configure content replication with Documentum
Administrator.
When you create a replication job, you must choose a replication mode and a
security mode. Each security mode behaves differently depending on which
replication mode you choose. In addition, replica objects in the target repository are
placed in different storage areas depending on which security mode you choose.
“Choosing replication and security modes” on page 447 contains information on
choosing replication and security modes.
“Creating replication jobs” on page 443 contains the instructions on how to access
the New Replication Job - Source page and create new replication jobs.
“Creating replication jobs” on page 443 contains the instructions on how to access
the New Replication Job - To Target page and create new replication jobs.
“Creating replication jobs” on page 443 contains the instructions on how to create
new replication jobs.
Field Description
Code Page Specifies the correct code page for the
replication job. Keep the value at the default,
UTF-8, unless it must be changed.
Caution
Fast replication does not replicate
all relationships. OpenText
Documentum Platform and Platform
Extensions Installation Guide
contains more information.
Field Description
Full Text Indexing Specifies the full-text indexing mode. Valid
options are:
• Use target repository settings for
indexing: The same documents are
indexed in the source and target.
• Do not index replicas: None of the
replicas are marked for indexing.
• Index all replicas: All replicas in a format
that can be indexed are marked for
indexing.
Replication Mode Specifies the replication mode.
Field Description
Transfer operator Specifies the user who manually transfers the
replication job.
Caution
The replication job creates a dump
file and a delete synchronization
file. Both files must be transferred
to the target. Always transfer the
dump file first.
• Federated mode, which can be used whether or not the source and target
repositories are in a federation.
• Non-federated mode, which is named external replication mode in the OpenText
Documentum Platform and Platform Extensions Installation Guide. This mode can be
used whether or not the source and target repositories are in a federation.
The security modes determine how a permission set is assigned to replica objects in
the target repository. The security modes are:
• Preserve
• Remap
When a records migration job runs, it generates a report that lists the criteria selected
for the job, the query built from the criteria, and the files selected for moving. You
can execute the job in report-only mode, so that the report is created but the files are
not actually moved.
Any user type can create a BOCS caching job. DQL queries for BOCS caching jobs
are not validated by Documentum Administrator.
“Creating BOCS caching jobs” on page 451 contains the instructions on how to
create a BOCS caching job.
Field Description
Object Type The type of objects needed to create caching
messages. By default, dm_sysobject is
selected.
Field Description
Network Location The destination list of the cached content.
Use a job sequence when jobs must run in a particular order or the periods of time in
which jobs run must not overlap. For example, if replication jobs replicate objects
from multiple source repositories to a single target repository or if replication jobs
replicate replica objects, use a job sequence to control the order in which the jobs
execute.
sequence. However, you are not prevented from selecting a job that is in the
active state. If you select a job that is in the active state, change its state to
inactive.
• All jobs in a job sequence must execute a method where there is a method success
code or method success status in the method object, and only such jobs are
displayed in the user interface when a job sequence is created. Before you create
a job sequence, examine the jobs you plan to include and the methods executed
by those jobs to make sure that a method success code or method success status
is present.
• Each job sequence must include at least one job that has no predecessors. This job
is the first job to run. There can be more than one job in the sequence with no
predecessors.
• The jobs in the sequence run in parallel except when a job has a predecessor.
Documentum Administrator ensures that there is no cyclic dependency.
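The predecessor rules above can be sketched with a small dependency check. This
illustrative Python sketch (not a Documentum API) finds the jobs that start a
sequence and detects a cyclic dependency:

```python
def first_jobs(predecessors):
    """Jobs with no predecessors; these are the first jobs to run."""
    return [j for j, preds in predecessors.items() if not preds]

def has_cycle(predecessors):
    """Detect a cyclic dependency with a depth-first search over the
    predecessor relation."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {j: WHITE for j in predecessors}

    def visit(j):
        color[j] = GRAY  # on the current DFS path
        for p in predecessors.get(j, []):
            if color.get(p) == GRAY or (color.get(p) == WHITE and visit(p)):
                return True
        color[j] = BLACK  # fully explored
        return False

    return any(visit(j) for j in predecessors if color[j] == WHITE)
```

A sequence where every job eventually depends on every other has a cycle and,
equivalently, no job without predecessors, so it is not a valid job sequence.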
Before you create a job sequence, obtain the username and password for a superuser
in each repository where the sequence runs a job.
Field Description
Job Repositories
Add Click Add to access the Choose Repositories
page.
Field Description
Repository The name of the repository where you want
to run the job. By default, the current
repository is listed with the currently-
connected superuser, but you are not
required to run any jobs in the current
repository.
User Name The login name for the repository.
Password The password for the repository.
Domain Specify the domain for any repository
running in domain-required mode.
Job Sequence Information
Add Click Add to add jobs to the sequence.
Access the Choose jobs dependency page by clicking Edit in the Job Sequence
Information section on the Connection Info tab of the New Job Sequence or Job
Properties page.
Note: Each job sequence must include one job that has no predecessors. This
job is the first to run. The jobs in the sequence run in parallel except when a job
has a predecessor.
Most jobs pass standard arguments to the method executed by the job. The
arguments are set on the Method tab for each job, and can be modified in most cases.
You can run a job manually (at a time other than the scheduled run time). Note that
a job invoked in this fashion runs when the agent exec process starts the job, not
when you click Run. The agent exec process polls the repository every five minutes,
so the start of the job is delayed up to five minutes, depending on when you clicked
Run and when the agent exec process last polled the repository.
The list page refreshes and the Status column for the job is updated. You may need
to click View > Refresh several times because the job does not run immediately after
you click Tools > Run.
A job can be set to inactive automatically if you have set a maximum number of
executions for the job. In such cases, the job is inactivated after the specified number
of executions are completed. Similarly, if you specify an expiration date for a job, it
is inactivated on that date.
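A simplified model of that inactivation decision follows; the parameter names are
assumptions for illustration, not actual job object properties:

```python
from datetime import date

def should_inactivate(run_count, max_runs=None, expiration=None, today=None):
    """Return True when a job has reached its maximum number of
    executions or its expiration date, mirroring the automatic
    inactivation behavior described above."""
    today = today or date.today()
    if max_runs is not None and run_count >= max_runs:
        return True
    if expiration is not None and today >= expiration:
        return True
    return False
```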
You can change these default parameters, including the polling interval. The
behavior is controlled by command line arguments in the method_verb argument of
the method object that invokes the agent exec process. The arguments can appear in
any order on the command line. You must have Superuser privileges to change the
agent_exec_method.
The agent exec process runs continuously, polling the repository at specified
intervals for jobs to execute. The default polling interval is controlled by the setting
in the database_refresh_interval key in the server.ini file. By default, this is set to 1
minute (60 seconds).
To change the polling interval without affecting the database refresh interval, add
the -override_sleep_duration argument with the desired value to the
agent_exec_method command line. Use Documentum Administrator to add the
argument to the command line. For example:
.\dm_agent_exec -override_sleep_duration 120
By default, the agent exec executes up to three jobs in a polling cycle. You can
configure up to a maximum of 500 jobs per polling cycle. To change the maximum
number of jobs that can run in a polling cycle, add the -max_concurrent_jobs
argument with the desired value to the agent_exec_method method command line.
For example:
.\dm_agent_exec -max_concurrent_jobs 5
You can enable load balancing at the job scheduling level by adding the
-enable_ha_setup argument. By default, this feature is disabled. Valid values are
0 (false) and 1 (true). For example:
.\dm_agent_exec -enable_ha_setup 1
By default, the agent exec sleeps for 30 seconds between launching jobs. To change
this value, add the -job_launch_interval argument with the desired value. For
example:
.\dm_agent_exec -job_launch_interval 10
Tracing for the agent exec process is turned off by default (trace level = 0). To turn it
on, use Documentum Administrator to add the -trace_level argument to the
agent_exec_method method command line. For example:
.\dm_agent_exec -trace_level 1
Setting the trace level to any value except zero turns on full tracing for the process.
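Taken together, the agent exec arguments described in this section can be modeled
as follows. This is an illustrative sketch of the documented defaults and the
"-name value" command-line form, not the actual agent exec code:

```python
# Defaults as documented in this section.
DEFAULTS = {
    "override_sleep_duration": 60,  # polling interval, seconds
    "max_concurrent_jobs": 3,       # jobs per polling cycle
    "job_launch_interval": 30,      # pause between job launches, seconds
    "enable_ha_setup": 0,           # load balancing disabled by default
    "trace_level": 0,               # tracing off by default
}

def parse_agent_exec_args(argv):
    """Parse '-name value' pairs, which may appear in any order on the
    method_verb command line, into a configuration dictionary."""
    config = dict(DEFAULTS)
    it = iter(argv)
    for token in it:
        name = token.lstrip("-")
        if name in config:
            config[name] = int(next(it))
    return config
```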
Each entry in the file appears on a separate line and has the following format:
server_connect_string,[domain],user_name,password
Using this file is optional. If the dm_run_dependent_jobs method does not find a
matching entry in the file or if the file does not exist, the method attempts to use a
trusted login to invoke the sequenced job.
The value can be any valid value used to designate a repository or server when
connecting to a repository. For example, the following are valid formats for the
values:
repository_name
repository_name.documentum_server_name
repository.documentum_server_name@host_name
The only requirement is that the value you define for the server connection string in
the file must match the value specified in the job_docbase_property for the job in the
sequence.
You can use commas or backslashes in the values specified for the server connection
string, domain, and user name by escaping them with a backslash. For example,
“doe\,john” is interpreted as “doe,john”, and “doe\\susie” is interpreted as
“doe\susie”. You cannot use a backslash to escape any characters except commas
and backslashes.
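A parser for one entry of this file, honoring the escape rules described above,
might look like the following sketch (the helper name is illustrative):

```python
def split_entry(line):
    """Split one connection-file entry on unescaped commas. Only '\\,'
    and '\\\\' are escape sequences; a backslash before any other
    character is kept literally, as described above."""
    fields, current, chars = [], [], iter(line)
    for ch in chars:
        if ch == "\\":
            nxt = next(chars, "")
            current.append(nxt if nxt in (",", "\\") else ch + nxt)
        elif ch == ",":
            fields.append("".join(current))
            current = []
        else:
            current.append(ch)
    fields.append("".join(current))
    return fields
```

For example, the entry `repo1,,doe\,john,secret` yields an empty domain field and
the user name `doe,john`.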
The dcf_edit utility allows you to add, remove, or replace entries in a repository
connection file. It also allows you to back up the entire file. The utility is installed
with Documentum Server as a method implemented using a Java class. The class is:
documentum.ecs.docbaseConnectionFile.DCFEdit
You can run the utility as a method from Documentum Administrator or as a Java
command line utility. “dcf_edit utility arguments” on page 460 lists the arguments
accepted by the method or on the command line.
Argument Description
-server_config server_config_name Identifies the repository to which the utility
will connect.
Argument Description
-password user_password The clear-text password for the user
identified in -login_user.
When the utility executes an Add operation, it looks for an entry in the file that
matches the values you provide as arguments for -server_config, -login_user, and -
login_domain. If a match is not found, the utility creates a new entry. If a match is
found, the utility replaces the existing entry with the values in the arguments.
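The Add operation is effectively an upsert keyed on those three values. A minimal
sketch, with entries modeled as dictionaries (an assumption for illustration; the
utility's internal representation is not documented here):

```python
def add_entry(entries, server_config, login_user, login_domain, password):
    """Upsert an entry the way the dcf_edit Add operation is described:
    if an entry matches on (server_config, login_user, login_domain),
    replace it; otherwise append a new entry."""
    key = (server_config, login_user, login_domain)
    new = {"server_config": server_config, "login_user": login_user,
           "login_domain": login_domain, "password": password}
    for i, e in enumerate(entries):
        if (e["server_config"], e["login_user"], e["login_domain"]) == key:
            entries[i] = new  # match found: replace existing entry
            return entries
    entries.append(new)       # no match: create a new entry
    return entries
```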
1. Examine the status report of the controlling job to determine which jobs failed.
Use Documentum Administrator to view the job's status report. “Interpreting a
job sequence status report” on page 462 describes the entries in the report.
2. Examine the job reports of the failed jobs and the session and repository log files
to determine the cause of the failure.
3. Correct the problem.
4. Run dm_run_dependent_jobs as a method, with the -skip argument, to re-
execute the failed jobs.
“Executing dm_run_dependent_jobs independently” on page 463, describes the
arguments that you can include when you run the method independently of the
job.
Any user can create an alias set. However, only the owner of the alias set or a
superuser can change or delete an alias set. If the server API is used, the constraints
are different:
• To change the owner of an alias set, you must be either the owner of the alias set
or have superuser privileges.
• To change other properties or to delete an alias set, you must be the owner of the
alias set or a user with system administrator or superuser privileges.
Field Description
Name The name of the alias set.
Description A description of the alias set.
Owner The name of the alias set owner.
Property Description
Name The name of the alias.
Category The category to which the alias belongs.
Value The value assigned to the alias.
Description A description of the alias.
You must be the owner of the alias set or have superuser privileges to add, modify,
or delete aliases.
Field Description
Name The name of the alias.
Field Description
Value The value for the category property. Click
Get Value to select a value from a list of
values associated with the category you
previously selected.
Formats
17.1 Formats
Format objects define file formats. Documentum Server only recognizes formats for
which there is a format object in the repository. When a user creates a document, the
format of the document must be a format recognized by the server. If the format is
not recognized by the server, the user cannot save the document into the repository.
The Documentum Server installation process creates a basic set of format objects in
the repository. You can add more format objects, delete objects, or change the
properties of any format object. In Documentum Administrator, all formats are
located on the Administration > Formats node.
Field Value
Name The name of the format (for example: doc,
tiff, or lotmanu).
Default File Extension The DOS file extension to use when copying
a file in the format into the common area,
client local area, or storage.
Description A description of the format.
Com Class ID The class ID (CLSID) recognized by the
Microsoft Windows registry for a content
type.
Mime Type The Multimedia Internet Mail Extension
(MIME) for the content type.
Windows Application The name of the Windows application to
launch when users select a document in the
format represented by the format object.
Macintosh Creator Information used internally for managing
Macintosh resource files.
Macintosh Type Information used internally for managing
Macintosh resource files.
Field Value
Class Identifies the class or classes of formats to
which a particular format belongs. The class
property works with all search engines.
Assignment policies determine the correct storage area for content files. A new type
inherits a default assignment policy from the nearest supertype in the type hierarchy
that has an active assignment policy associated with it. After the type is created,
you can associate a different assignment policy with the type.
Properties are stored as columns in a table representing the type in the underlying
RDBMS. However, not all RDBMSs allow you to drop columns from a table.
Consequently, if you delete a property, the corresponding column in the table
representing the type may not actually be removed. In such cases, if you later try to
add a property to the type with the same name as the deleted property, you receive
an error message.
Any changes made to a type apply to all objects of that type, to its subtypes, and to
all objects of any of its subtypes.
Field Value
Info
Type Name The name of the object type. This field is
read-only in modify mode.
Model Type This field is read-only in modify mode.
Options are:
• Standard: This is the default and is for
heavy types.
• Shareable: Defines a shareable SysObject
model type for SysObject supertypes and
their subtypes.
• Lightweight: The system checks for the
existence of shareable types in the current
repository. If there are no shareable types
in the current repository, this option is
not available. If selected, the Parent
Type, Materialize, and FullText fields
become available and the Super Type
Name field is not displayed.
Field Value
Super Type Name The name of the supertype. The default
supertype is dm_document. This field is:
• Not available if the Model Type is
Lightweight.
• Read-only in modify mode.
Field Value
Enable Indexing The system displays the Enable Indexing
checkbox if:
• The type is dm_sysobject or its subtype
and you are connected as a superuser to a
5.3 SP5 or later repository. If either of
these conditions is not met, the system
does not display the checkbox.
• A type and none of its supertypes are
registered. The system displays the
checkbox cleared and enabled. You can
select the checkbox to register the type for
full-text indexing.
• A type is registered and none of its
supertypes are registered. The system
displays the Enable Indexing checkbox
selected and enabled.
• A supertype of the type is registered for
indexing. The system displays the Enable
Indexing checkbox selected but disabled.
You cannot clear the checkbox.
Field Value
Materialize This option is available only when creating a
type and Model Type is Lightweight. This
field is read-only in modify mode.
You cannot remove system-defined types from the repository. If you delete an object
type with an associated assignment policy, the assignment policy is not removed.
You can delete it manually.
You cannot delete a shareable type that is shared by a lightweight SysObject. Delete
the dependent lightweight objects first.
Use the Assignment Policy Inheritance page to view the assignment policies defined
for a type or to understand policy inheritance and gauge the impact of changes to
any policies. Knowing the inheritance hierarchy helps with troubleshooting if
content files are not saved in the correct storage area for that type.
The page displays a type and its supertypes in descending order, with the type
highest in the type hierarchy at the top of the list. The assignment policy associated
with each type is displayed, if the assignment policy is active. If the selected type
does not have an active assignment policy associated with it, the assignment policy
associated with its immediate supertype is applied. If its immediate supertype does
not have an active assignment policy, the policy associated with the next supertype
in the hierarchy is applied until the SysObject supertype is reached.
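The inheritance walk described above can be sketched as follows; the dictionaries
standing in for the type hierarchy and the policy assignments are illustrative, not
a Documentum API:

```python
def effective_policy(type_name, supertype, policy):
    """Walk up the type hierarchy (type -> supertype -> ...) and return
    the first active assignment policy found, as described above.
    'supertype' maps each type to its supertype (None at the top);
    'policy' maps a type to its active policy name, if it has one."""
    t = type_name
    while t is not None:
        if t in policy:
            return policy[t]
        t = supertype.get(t)  # no active policy: try the next supertype
    return None
```

If no type in the chain up to the SysObject supertype has an active policy, no
policy applies.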
Storage management
• Storage
The storage area contains pages for creating various store types, such as file
stores, retention stores (Centera and NetApp SnapLock), blob stores, turbo
stores, Atmos stores, S3-compatible stores, mount point objects, location objects,
and storage plug-ins.
• Assignment Policies
The Assignment Policies area contains pages for creating assignment policies.
These pages only appear if the Content Storage Services feature is enabled for the
repository.
• Migration policies
The Migration Policies area contains pages for creating jobs to move content files
among storage areas based on user-defined rules and schedules. These pages
only appear if the Content Storage Services feature is enabled for the repository.
19.2 Storage
The storage area contains information about existing stores and pages for creating
various store types. To access the storage area, select Administration > Storage
Management > Storage. The Storage list page displays with a list of existing stores
and location objects. Documentum can use any storage system that complies with
the NFS or Linux file system protocols and standards, or with the CIFS file system
protocols on Microsoft Windows. The first ten storage areas in the repository are
displayed in the order in which they were created.
Storage Description
File Store File stores hold content as files.
Linked Store Linked stores do not contain content, but
point to the actual storage area, which is a
file store.
Storage Description
Blob Store Blob stores store files directly in the
repository in a special table.
Mount Point Mount point objects represent directories
that are mounted by a client.
External File Store External file stores do not store any content.
They point to the actual file system.
External Free Store External free stores do not store any content.
They point to a CD-ROM or user-defined
storage.
Storage Description
OpenStack Swift Store An OpenStack Swift store offers cloud object
storage to store and retrieve large amounts
of data with a simple API. It is optimized for
durability, availability, and concurrency
across the entire data set.
REST Store Documentum Server supports Microsoft
Azure Blob and Google Cloud storage types
as REST object store.
Distributed Store Distributed stores do not contain content, but
point to component storage areas that store
the content.
Field Description
Info
Name The name of the file store. The name must be
unique within the repository.
Description The description of the file store.
Field Description
Location Select the location on the server host.
Caution
Be sure that the storage path you
specify does not point to the same
physical location as any other file
stores. If two file stores use the
same physical location, it may
result in data loss.
Field Description
Status Select a radio button to change the status of
the file store to on line, off line, or read-only.
This field only displays for an existing file
store.
Digital Shredding Select to enable digital shredding.
Caution
The Documentum Administrator
interface for version 6 and later
displays the Digital Shredding
checkbox for all file stores. If the
file store is a component of a
distributed store, files are not
digitally shredded even when it
appears that digital shredding is
enabled for the file store.
Field Description
Content Duplication Select to enable content duplication checking.
This option is only available in repositories
with Content Storage Services enabled.
• When Generate content hash values only
is selected, for each piece of content
checked in to the repository,
Documentum Server calculates the value
needed to determine whether or not it is
duplicate content.
• When Generate content hash values and
check for duplicate content is selected,
for each piece of content checked in to the
repository, Documentum Server
calculates the value needed to determine
whether or not it is duplicate content and
then checks for duplicate content.
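The text does not name the hash algorithm Documentum Server uses, so the sketch
below stands in SHA-256 to illustrate the two modes (generate hash values only,
versus generate and check); the function names are illustrative:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Hash a piece of content (SHA-256 here as a stand-in; the actual
    algorithm used by Documentum Server is not specified in this
    section)."""
    return hashlib.sha256(data).hexdigest()

def is_duplicate(data: bytes, seen: set) -> bool:
    """Generate the hash and check it against previously seen hashes,
    mirroring 'generate content hash values and check for duplicate
    content'. Records the hash as a side effect."""
    h = content_hash(data)
    if h in seen:
        return True
    seen.add(h)
    return False
```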
Note: Data Domain does not allow creation of files with Alternate Data Stream
(ADS).
• The Documentum Server host and the NAS storage host should be in the same
domain.
• Only a domain administrator can install Documentum Server on the
Documentum Server host.
• For Windows: Data directory path is the CIFS share path of the NAS
storage device.
• For Linux: NAS storage device NFS share path is mounted on the Linux
host as root and used as the data directory path.
root$ mount -t nfs <IP address of DD store>:<shared NFS path> <local mount path>
Click Next.
e. Select Yes for the Is this a NAS or SAN device option and click Next.
f. Continue with the options in the Documentum Server Configuration
Program and complete it. OpenText Documentum Platform and Platform
Extensions Installation Guide contains detailed instructions.
After the Documentum Server Configuration Program is complete, the NFS file store
share is created.
Field Value
Name The name of the storage object. This name
must be unique within the repository.
Location The name of the directory containing the
logical link.
Linked Store The name of the storage area to which the
link is pointing.
Use symbolic links If selected, symbolic links are used.
Get Method To install a custom SurrogateGet, click Select
Method and browse to the method on the
server host file system.
You cannot define a blob storage area as the underlying area for a linked store or as
a component of a distributed storage area. That is, blob storage cannot be accessed
through a linked store storage area or through a distributed storage area.
Field Description
Name The name of the storage object. This name
must be unique within the repository and
must conform to the rules governing type
names.
Content Type Valid values are:
• ASCII
• 8-bit Characters
Get Method To install a custom SurrogateGet, click Select
Method and browse to the method on the
server host file system.
Field Value
Name The name of the mount point object.
Caution
Be sure that the combination of the
host and path you specify does not
point to the same physical location
as any other file stores. If two file
stores use the same physical
location, data loss may result.
Field Value
Macintosh Preferred Alias Set to the volume name chosen for the
mounted directory.
External stores require a plug-in that you must create before you create an external
store. The plug-in can run on the server side or client side, although a client-side
plug-in could provide better performance. Documentum provides code for sample
plug-ins in the DM_HOME/unsupported/plugins directory.
Field Description
Info
Name The name of the new external store.
Windows Indicates the plug-in that is used on the
Windows platform.
Solaris Indicates the plug-in that is used on the
Solaris platform.
Aix Indicates the plug-in that is used on the Aix
platform.
HP-UX Indicates the plug-in that is used on the HP-
UX platform.
Macintosh Indicates the plug-in that is used on the
Macintosh platform.
Linux Indicates the plug-in that is used on the
Linux platform.
HP-UX-Itanium Indicates the plug-in that is used on the HP-
UX-Itanium platform.
Current Client Root The name of the location object that
represents the default root of the content for
executing plug-ins on the client when the
mount is not executed. This option is only for
external file stores.
Field Description
Client Root The name of the location object that
represents the default root of the content for
client side plug-in execution when mount is
not executed. The default is NULL. This
option is only available for external file
stores.
19.2.6 Plug-ins
A plug-in is a shared library (on Linux systems) or DLL file (on Windows systems)
for retrieving content when an external store is in use.
You must create the plug-in. Documentum provides code for sample plug-ins in the
DM_HOME/unsupported/plugins directory. The sample plug-ins are examples and
are not supported by Documentum.
The API interface between the shared library or DLL and the server consists of C
functions for the plug-in library.
Field Value
Name The name of the plug-in object.
Hardware Platform Specifies one or more hardware platforms on
which the plug-in can run.
SnapLock requires:
On Linux, mount the SnapLock store to local disk using NFS. Use the mount point
path in the SnapLock store object path. For example:
mount <snaplockhost:/path/to/use> </local/mount/point>
mount snapdisk:/u01/disk1 /dctm/snapstore
When you create the SnapLock store, use the name of the local mount path, /dctm/
snapstore.
Field Description
Name The name of the NetApp SnapLock store.
Description A description of the NetApp SnapLock store.
Field Description
Plug-in Name The name of the plug-in for the NetApp
SnapLock store. Options are:
• Default Plugin: Select to set a null ID
(0000000000000000) in the a_plugin_id
property of the NetApp SnapLock store
object.
• Select Plugin: Select to use the default
Snaplock Connection plug-in.
When a repository is created, a default
plug-in object is created. If the plug-in
object is deleted from the repository, no
plug-in is displayed. Create a new plug-in
object with the content of the plug-in
object set as follows:
– Windows: %DM_HOME%\bin\emcplugin.so
Field Description
Fixed Retention Specifies a fixed value for the retention
property value, as follows:
• Choose a retention period: Sets a period
of days as the retention period for all
content in the NetApp SnapLock store.
Enter the date and time of the retention
date.
• Choose default retention days: Select and
then type the number of retention days.
In a Centera store, you can:
• Index content.
• Enable content compression in 5.3 SP1 and later repositories.
Set the C-clip buffer size or configure use of embedded blob storage by using
optional storage parameters. Setting the C-clip buffer size is available only in 5.3 SP3
and later repositories.
Documentum Server supports distributed Centera clusters in 5.3 SP3 and later
repositories. The Centera store plug-in must be stored depending on different server
locations:
• If all Documentum Servers are running on the same computer, the Centera store
plug-in must be in a file store.
• If the Documentum Servers are running on different hosts, the Centera store
plug-in must be stored in a file store that is shared by all Documentum Server
instances or in a distributed store in which each Documentum Server has at least
one component defined as a near store.
Field Description
Name The name of the Centera store.
Description A description of the Centera store.
Field Description
Plug-in Name Specifies the plug-in that is used for the
Centera store. Select one of the following
options:
• Default Plugin: Sets a null ID
(0000000000000000) in the a_plugin_id
property of the Centera store object.
• Select Plugin: Enables the default CSEC
plug-in. To use a plug-in other than the
default CSEC plug-in, click the Select
Plugin link, locate the plug-in in the
repository, select it, and click OK.
Field Description
Fixed Retention Specifies a fixed value for the retention
property value, as follows:
• Choose a retention period: Sets a period
of days as the retention period for all
content in the Centera store. Enter the
date and time of the retention date.
• Choose default retention days: Select and
then type the number of retention days.
1. On the Info tab of the EMC Centera Store Properties, scroll to the Storage
Parameters field and click Edit.
The Storage Parameters page displays.
2. In the Enter new value field, type the connection string for the Centera storage
system.
The connection string has the following format:
<IP_address>|<hostname>{,<IP_address>|<hostname>}?<Centera_profile>
where:
secret=bar,10.241.35.110,10.241.35.111?path=C:\Temp\auth.xml
In the following example, the SDK parses the passed-in connection string with
the resulting elements going in the outConnectionStrings array, as follows:
outConnectionStrings [0] = 10.241.35.27?name=profA,secret=foo
The following rules apply to the syntax of a connection string with multiple
profiles:
• The IP address position must precede the listing of credentials and/or path
for that profile.
• If the connection string includes a path that does not use the path= prefix but
points to a PEA file and a set of credentials, the path must precede the
• To store content as embedded blobs, enter the following parameter:
pool_option:embedded_blob:<size_in_KB>
where size_in_KB is the maximum size in kilobytes of the content that you
want to store as embedded blobs. For example, if you want to store all
content that is 60 KB or smaller as embedded blobs, set the storage
parameter value as:
pool_option:embedded_blob:60
• To change the maximum number of socket connections that the Centera SDK
can establish with the Centera host, enter the following parameter:
pool_option:max_connections:<integer>
Caution
If you have entered multiple parameters, the Centera connection string
must be in the first position.
1. On the Info tab of the EMC Centera Store Properties, scroll to the Content
Attribute section and click Add.
The Content Attribute page displays.
3. Type a description.
4. Click OK.
Documentum Server uses the ACS connector to communicate with the Atmos store.
The ACS module supports storing and retrieving content through an HTTP
interface.
Field Description
Name The name of the Atmos store. The name must
be unique within the system. The name of an
existing Atmos store cannot be modified.
URL The URL the server uses to communicate
with the Atmos store.
Full Token ID A combination of subtenant ID and a UID
within that subtenant, both in the Atmos
system that is being targeted. The format is
subtenant ID/UID.
Shared Secret The password of the user accessing the
Atmos store.
The Web service uses a combination of the Token ID and other request headers to
produce a signature that authenticates the user accessing the web service. It uses a
combination of various pieces of the message to validate the identity of the sender,
the integrity of the message, and the non-repudiation of the action. The Token ID
that you received via e-mail from the portal agent administrator consists of the
subtenant ID and the UID separated by a slash (/). The subtenant ID is a randomly
generated, 32-character alphanumeric string, which is unique to each customer. The
UID, however, is unique to a web-based application. The UID, which appears after
the slash, comprises a portion of the customer name and a randomly generated
string. For example, if the customer name is ACME, the UID begins with ACME
followed by additional random characters. The whole Token ID contains 53
characters, including the slash (for example: 5f8442b515ec402fb4f39ffab8c8179a/
ACME03GF52E8D8E581B5).
To complete the authentication operation, you must generate a signature using the
shared secret, which is associated with the UID. The shared secret is a value
generated by the Storage Utility Agent responsible for managing this application.
The shared secret appears in the same e-mail message that contains the Token ID.
The following sample represents a typical shared secret:
MBqhzSzhZJCQHE9U4RBK9ze3K7U=
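As a rough illustration of how a client combines these pieces, the sketch below splits the sample Token ID into its subtenant ID and UID and computes an HMAC-SHA1 signature over a request string using the sample shared secret. The exact string-to-sign is defined by the Atmos REST specification; the one here is a simplified placeholder:

```python
import base64
import hashlib
import hmac

def split_token_id(token_id: str) -> tuple[str, str]:
    """Split a full Token ID into (subtenant ID, UID) at the slash."""
    subtenant_id, uid = token_id.split("/", 1)
    return subtenant_id, uid

def sign(string_to_sign: str, shared_secret_b64: str) -> str:
    """HMAC-SHA1 over the request string, keyed with the base64-decoded
    shared secret, then base64-encoded. A simplified version of Atmos
    request signing; the real string-to-sign is built from specific
    request headers."""
    key = base64.b64decode(shared_secret_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

token = "5f8442b515ec402fb4f39ffab8c8179a/ACME03GF52E8D8E581B5"
subtenant, uid = split_token_id(token)
print(subtenant)  # 5f8442b515ec402fb4f39ffab8c8179a
print(uid)        # ACME03GF52E8D8E581B5
print(sign("GET\n/rest/objects\n", "MBqhzSzhZJCQHE9U4RBK9ze3K7U="))
```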
• Install the license for ViPR Object on ViPR, including Amazon S3.
• Enable data services for object-based storage services and make sure that the
Amazon S3 store is accessible.
• Create a bucket for data services to be used by the Documentum Server.
Field Description
Name The name of the ViPR store. The name must
be unique within the system. The name of an
existing ViPR store cannot be modified.
URL The URL that the server uses to communicate
with the ViPR store. The URL format is
http://<xx.xx.xx.xx>:9020/<bucket-name>.
For example, http://10.31.4.172:9020/csstore.
Access Key ID The user name of the user accessing the ViPR
store. Use the ViPR Tenant Owner as the
Access Key ID.
Shared Secret The password of the user accessing the ViPR
store. Use the Object Access Key as the
Shared Secret.
Starting with the 21.4 release, the ViPR plug-in is no longer provided with
Documentum Server. Instead, you can use the S3 store plug-in, updated with
performance enhancements, to configure the ViPR store, because both plug-ins use
the Amazon AWS SDK for content transfer operations.
Optionally, you can update all metadata information from the existing ViPR store to
the new S3 store.
Note: You can migrate only the metadata information but not the content.
1. Obtain the details of existing ViPR store object using the following DQL query:
select * from dm_vipr_store
2. Obtain the count of dm_sysobject and dmr_content in the existing ViPR store to
validate the count after migration using the following DQL query:
select count(*) from dmr_content where storage_id='<existing vipr store id>'
select count(*) from dm_sysobject where a_storage_type='<existing vipr store name>'
4. Update all metadata information to refer to the new S3 store object created in
Step 3 using the following API command:
execsql,c,update dm_sysobject_s set a_storage_type='<new s3 store name>' where
a_storage_type='<existing vipr store name>'
execsql,c,update dmr_content_s set storage_id='<new s3 store id>' where
storage_id='<existing vipr store id>'
5. Verify if all objects are updated successfully using the following DQL query:
select count(*) from dmr_content where storage_id='<new s3 store id>'
select count(*) from dm_sysobject where a_storage_type='<new s3 store name>'
6. Compare the results in Step 2 and Step 5 and make sure both are equal.
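The count comparison in Steps 2 and 6 can be expressed as a simple equality check. This is only a sketch; the counts would come from the DQL queries above, and the values here are made up:

```python
def migration_complete(before: dict, after: dict) -> bool:
    """Return True when the post-migration counts match the pre-migration
    counts for both dmr_content and dm_sysobject."""
    return (before["dmr_content"] == after["dmr_content"]
            and before["dm_sysobject"] == after["dm_sysobject"])

# Hypothetical counts taken before (ViPR store) and after (S3 store) migration
before = {"dmr_content": 1250, "dm_sysobject": 1180}
after = {"dmr_content": 1250, "dm_sysobject": 1180}
print(migration_complete(before, after))  # True
```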
Notes
Note: The prerequisite for plug-in-enabled stores is that the ACS must always
be running. ACS read/write may be disabled through configurations, but ACS
itself must not be stopped.
The plug-in modules are deployed as BOF modules. The S3 plug-in is available in
the java_access attribute of the dm_store subtype object (dm_s3_store). The plug-
in needs the support of other external libraries. These external libraries are deployed
as Java libraries with the BOF modules. All the deployment rules pertaining to BOF
modules and TBOs apply as is. Also, all the required BOF modules (s3-tbo-api.
jar and s3-tbo-impl.jar) and dependent Java libraries (aws s3 client libs) are
packaged as part of the S3Plugin.dar file.
• Read Content: Calls the readObjectAsStream call of the RESTful API and pulls
the content from the S3 store.
• Write Content: Uses the RESTful API to write the content to S3 with the supplied
metadata. The returned object identifier is passed to the upper layer wrapped as
a StoreResult Object implementation of IStoreWriteResult.
• Delete Content: Calls the S3 Delete Content API.
• Write or Update Metadata: Metadata is passed to the plug-in by DFC. Adds the
content id as metadata to the objects or content that are being pushed to S3. The
metadata is a key value pair. For example, the key is dmr_content_id and the
value is [The Content Object Id].
• java_access: Name of the module (S3 plug-in) that the store object represents.
The default value is S3 Plugin for dm_s3_store type objects.
• base_url: S3 storage service URL. Format is http://<X.X.X.X>/<BUCKET>
where
– BUCKET: Name of the S3 bucket used to push the content. This name is
mandatory. Bucket must be preconfigured in the S3 storage service and
accessible using the following credentials:
After it is configured, the store is accessible as any other Documentum Server store.
A new TBO is created for the plug-in. The dm_s3_store object cannot be updated
after it is created. All content uploaded to the Amazon S3 store has a unique
key. This key is derived from the content-id.
Field Description
Name The name of the S3 store. The name must be
unique within the system. The name of an
existing S3 store cannot be modified.
URL The URL that the server uses to communicate
with the S3 store. The URL format is http://
<X.X.X.X>/<BUCKET>.
Access Key ID The user name of the user accessing the S3
store. Use the S3 Tenant Owner as the Access
Key ID. This is an optional field.
Shared Secret The password of the user accessing the S3
store. Use the Object Access Key as the
Shared Secret. This is an optional field.
Proxy Host The IP address of the proxy server. This is an
optional field. However, you must provide a
value if Documentum Server host is behind
the proxy server and s3 object store in the
public cloud.
Proxy Port The port reserved for the proxy server. This
is an optional field. However, you must
provide a value if Documentum Server host
is behind the proxy server and s3 object store
in the public cloud.
Region The region where S3 bucket is configured.
This is an optional field.
Query Parameter The URL query parameter that the
corresponding S3 store vendor expects for
extending retention. This is specific to the
compatible store. This is an optional field.
Retention Header Name Header used for specifying the new retention
date. The date format must conform to the
ISO 8601 format
(YYYYMMDDThhmmssZ). This is specific to
the compatible store. This is an optional field.
Mode Retention mode that applies to different
levels of protection. This is an optional field
used for the AmazonS3 vendor only. Valid
values are:
• null
• COMPLIANCE
• GOVERNANCE: This is the default value
for Amazon S3.
Vendor Specifies the storage vendor. This is an
optional field. Valid values are:
• null
• AmazonS3
• NetAppStorageGRID
• HCP
• IBMCOS
• EMCECS
• For use of v4 signing for all S3 requests, set the value of enable-v4signing to
true. The default value is false.
• To include MD5 checksum value to verify the integrity of the object, set the value
of enable-md5 to true. The default value is false.
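When enable-md5 is set, the integrity check amounts to sending a base64-encoded MD5 digest of the content body, as in the standard S3 Content-MD5 header. A minimal sketch:

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    """Base64-encoded MD5 digest of the content body, the form S3-style
    services expect for integrity verification (Content-MD5 header)."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

print(content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```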
Note: If you set a value less than the minimum, Documentum Server
uses the minimum value of 900 seconds. If you set a value greater
than the maximum, Documentum Server uses the maximum value of
86400 seconds.
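The clamping behavior described in the note can be sketched as follows (900 and 86400 are the documented minimum and maximum, in seconds):

```python
MIN_SECONDS = 900     # documented minimum
MAX_SECONDS = 86400   # documented maximum

def effective_value(configured: int) -> int:
    """Documentum Server clamps an out-of-range setting to the nearest bound."""
    return max(MIN_SECONDS, min(configured, MAX_SECONDS))

print(effective_value(500))     # 900   (below minimum)
print(effective_value(3600))    # 3600  (in range)
print(effective_value(100000))  # 86400 (above maximum)
```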
19.2.11.4 Logging
Tracing is in line with the DFC tracing activity. However, additional log
statements at the DEBUG level appear in the log. The DEBUG level can be
enabled using log4j2.properties in ACS.
• Windows: %DM_JMS_HOME%\webapps\ACS\WEB-INF\classes\log4j2.
properties
• Linux: $DM_JMS_HOME/webapps/ACS/WEB-INF/classes/log4j2.properties
1. Open log4j2.properties.
log4j.logger.com.documentum.content.store.plugin.s3=DEBUG,ACS_LOG
19.2.11.5 Troubleshooting
The logging and tracing information generated by DFC and ACS is sufficient for
troubleshooting. To verify if store plug-ins for S3 and TBO for S3 store are installed,
run the queries in IAPI to get the expected results.
• To verify if the plug-in module and its dependencies are installed, use the
following query format:
API> ?,c,select r_object_id,object_name from dm_sysobject
where FOLDER('/System/Modules/S3 Plugin',DESCEND)
Result:
r_object_id object_name
---------------- -----------------------
09xxxxxxxxxxxxxx S3 Plugin
0bxxxxxxxxxxxxxx External Interfaces
0bxxxxxxxxxxxxxx Miscellaneous
0bxxxxxxxxxxxxxx S3 Plugin Libraries
09xxxxxxxxxxxxxx aws-java-sdk-s3
09xxxxxxxxxxxxxx commons codec-s3
09xxxxxxxxxxxxxx commons logging-s3
09xxxxxxxxxxxxxx http client-s3
09xxxxxxxxxxxxxx http core-s3
09xxxxxxxxxxxxxx jackson-annotations-s3
09xxxxxxxxxxxxxx jackson-core-s3
09xxxxxxxxxxxxxx jackson-databind-s3
09xxxxxxxxxxxxxx joda-time-s3
(14 rows affected)
• To verify if the TBO for dm_s3_store is installed, use the following query format:
API> ?,c,select r_object_id,object_name from dm_sysobject
where FOLDER('/System/Modules/TBO/dm_s3_store',DESCEND)
Result:
r_object_id object_name
---------------- ------------------------
08xxxxxxxxxxxxxx RuntimeEnvironment.xml
09xxxxxxxxxxxxxx s3-tbo-api
09xxxxxxxxxxxxxx s3_tbo_impl
0bxxxxxxxxxxxxxx External Interfaces
0bxxxxxxxxxxxxxx Miscellaneous
(5 rows affected)
A store plug-in using REST APIs is used to allow Documentum Server to configure
OpenStack Swift as a data store. This plug-in is available as dm_swift_store type in
Documentum Server.
After it is configured, the store is accessible as any other Documentum Server store.
A new TBO is created for the plug-in. All content uploaded to the OpenStack
Swift store has a unique key. This key, which is in the form of a Linux path, is
derived from the content-id.
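The guide does not document the exact derivation, but a content-id-to-path mapping of the following general shape is common for object stores. This layout is purely hypothetical; the real key format is internal to the plug-in:

```python
def storage_key(content_id: str, segment: int = 8) -> str:
    """Hypothetical sketch: split a 16-character content object id into
    path segments to form a Linux-path-style object key. The actual key
    layout used by the Swift plug-in is internal to Documentum Server."""
    parts = [content_id[i:i + segment] for i in range(0, len(content_id), segment)]
    return "/".join(parts)

# A dmr_content object id is a 16-character hexadecimal string starting with 06
print(storage_key("0600000180001234"))  # 06000001/80001234
```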
Field Description
Name The name of the OpenStack Swift store. The
name must be unique within the system. The
name of an existing OpenStack Swift store
cannot be modified.
Authentication endpoint URL The URL that the server uses to communicate
with the OpenStack Swift store. API version
v2.0 and auth/v1.0 endpoints are supported
for OpenStack Keystone integration and /v1
endpoint is supported for basic
authentication. For example, http://
10.0.0.1:35357/v2.0.
Account Owner The user name of the user accessing the
OpenStack Swift store.
Password The password of the user accessing the
OpenStack Swift store.
Container The name of the container to store the
content.
Region The name of the preferred region to store the
content. This is an optional field.
Note: The push and pull operations do not work with the HTTP mode in
base_url.
• Migration of data to or from REST store: You can use the content migration job to
perform the migration. Migration of data is supported as follows:
is pushed to the REST store, and if de-duplication is enabled, the REST plug-in
does not push the content to the store again. Instead, the REST plug-in points
the endpoint at the matching content that is already available. In
addition, the content is deleted only if no other sysobjects reference it.
Note: The prerequisite for plug-in-enabled stores is that the ACS must always
be running. ACS read/write may be disabled through configurations, but ACS
itself must not be stopped.
The plug-in modules are deployed as BOF modules. The REST plug-in is available in
the java_access attribute of the dm_store subtype object (dm_rest_store). This
plug-in needs the support of other external libraries. These external libraries are
deployed as Java libraries with the BOF modules. All the deployment rules
pertaining to BOF modules and TBOs apply as is. In addition, all the required BOF
modules (rest-tbo-api.jar and rest-tbo-impl.jar) and dependent Java Spring
Boot libraries (web client, beans, and so on) are packaged as part of the
reststore.dar file.
• Read Content: Calls the readContent BOF module API and pulls the content from
the REST store.
• Write Content: Uses the RESTful API to write the content to Azure with the
supplied metadata. The returned object identifier is passed to the upper layer
wrapped as a StoreResult (implementation of IStoreWriteResult).
• Delete Content: Calls the deleteContent API which makes a delete API call to the
REST store.
• Write or Update Metadata: Metadata is passed to the plug-in by DFC. Adds the
content id as metadata to the object that is being pushed to the REST store. The
metadata is a key value pair. For example, the key is dmr_content_id and the
value is [The Content Object Id].
Note: The page blob and append blob types of Azure Blob storage are
not supported.
• java_access: Name of the module (REST plugin) that the store object
represents. The default value is rest plugin for dm_rest_store type objects.
○ <name of container>: Name of the Azure Blob container used to push the
content. This name is mandatory. The container may or may not be
preconfigured in the Azure Blob Storage service and is accessible using
the following credentials:
Note: The authentication file in the JSON format that you export must
be explicitly saved to the local filestore, even if your default store is S3
or Azure Blob or Google Cloud.
After it is configured, the store is accessible as any other Documentum Server store.
A new TBO is created for the plug-in. The dm_rest_store object cannot be updated
after it is created. All content uploaded to the REST store using the Azure
Blob or Google Cloud storage type has a unique key. This key is derived from
the content-id.
Documentum clients (for example, Documentum Administrator) use the UCF > ACS
> Store Plugin > REST path to store and retrieve content from the REST store.
Note: If you are upgrading from an earlier version (for example, 20.4), you
must manually copy the values of rest.properties available from %DM_HOME%
\bin (earlier version environment) to rest.properties available in %DM_JMS_
HOME%\webapps\ACS\WEB-INF\classes (upgrade environment).
• For proxy settings, set the appropriate values for the following properties:
• To enable the MD5 checksum value to verify the integrity of the object while
sending requests to the REST store, set the value of enable-md5 to true. The
default value is false.
• When multipart upload is chosen, the part-size reflects the size of each portion of
content that is uploaded to the REST store. Set the value of part-size as
needed. The part-size value is specified in megabytes. It is an optional field
and the default value is 5 MB.
• To perform parallel multipart processing in Azure Blob storage type, set the
value of multipart-parallel-processing to true. The default value is false.
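For example, the number of upload parts implied by a given part-size can be computed as follows (sizes in megabytes, with the documented 5 MB default):

```python
import math

def part_count(content_size_mb: float, part_size_mb: int = 5) -> int:
    """Number of multipart-upload parts for a content file; part-size is
    specified in megabytes and defaults to 5 MB."""
    return math.ceil(content_size_mb / part_size_mb)

print(part_count(23))       # 5 parts of up to 5 MB each
print(part_count(4))        # 1 (content smaller than one part)
print(part_count(100, 10))  # 10
```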
Notes
19.2.13.4 Logging
Tracing is in line with the ACS tracing activity. However, additional log
statements at the DEBUG level appear in the log. The DEBUG level can be
enabled using log4j2.properties in ACS.
• Windows: %DM_JMS_HOME%\webapps\acs\WEB-INF\classes\log4j2.
properties
• Linux: $DM_JMS_HOME/webapps/acs/WEB-INF/classes/log4j2.properties
1. Open log4j2.properties.
logger.rest.name = com.documentum.content.store.plugin.rest
logger.rest.level = <DEBUG/INFO/WARN>
logger.rest.additivity = false
logger.rest.appenderRefs = rest
logger.rest.appenderRef.rest.ref = RestFile
4. If you want the information about the call stack during any exception, set the
value of REST_STORE_STACK_TRACE to 1. This records the call stack information
in rest.log.
19.2.13.5 Troubleshooting
The logging and tracing information generated by DFC and ACS is sufficient for
troubleshooting. To verify if the REST plug-in and TBO for the REST store are
installed, run the queries in IAPI to get the expected results.
• To verify if the plug-in module and its dependencies are installed, use the
following query format:
API> ?,c,select r_object_id,object_name from dm_sysobject where FOLDER('/System/
Modules/rest plugin',DESCEND)
• To verify if the TBO for dm_rest_store is installed, use the following query
format:
API> ?,c,select r_object_id,object_name from dm_sysobject where FOLDER('/System/
Modules/TBO/dm_rest_store',DESCEND)
Distributed storage areas are useful when repository users are located in widely
separated locations. You can define a distributed storage area with a component in
each geographic location and set up the appropriate content replication jobs to make
sure that content is current at each location.
Field Description
Info
Name The name of the distributed store. This name
must be unique within the repository and
must conform to the rules governing type
names. This field is read only in modify
mode.
Fetch Content Locally Only Indicates whether the server fetches content
locally or from far stores that are not
available locally.
Get Method To install a custom SurrogateGet, click Select
Method to access the Choose a method page
to select a method on the server host file
system.
19.2.15 Locations
The directories that a Documentum Server accesses are defined for the server by
location objects. A location object can represent the location of a file or a directory.
Field Value
Name The name of the location object.
Path Specifies the file system or UNC path.
• File System Path: The location of the
directory or file represented by the
location object. The path syntax must be
appropriate for the operating system of
the server host. For example, if the server
is on a Windows NT machine, the
location must be expressed as a Windows
NT path.
• UNC: Indicates the UNC path.
Caution
Be sure that the combination of the
mount point and path you specify
does not point to the same physical
location as any other file store. If
two file stores use location objects
that point to the same physical
location, data loss may result.
You can assign specific retention dates for content stored in a retention-enabled
store. A retention date is the date to which the content file must be retained. If a
retention date is defined for content in the storage system, the file cannot be
removed from the repository until that date. For example, if you set the retention
date for an object to February 15, 2011, the content cannot be removed until that
date.
When a retention date is set for an object, it is set for all renditions associated with
page 0 (zero) of the object. Documentum Server moves the selected object and all of
its associated renditions to a retention-enabled storage area. If there are multiple
retention-enabled storage areas in the repository, you must select the target storage
area. To set a retention date, you must belong to the Webtop administrator role and
have at least WRITE permission on the object, and the Centera or SnapLock store
must be retention-enabled.
You can alternatively assign a retention period for content stored in a retention-
enabled store. A retention period is the amount of time for which the content must
be retained. If a retention period is defined, you cannot remove the file from the
repository until that period has expired. For example, if the retention period is set to
five years and the current date is January 1, 2007, the content file cannot be removed
before January 1, 2012.
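The five-year example can be checked with a small date computation. This is a sketch of the arithmetic only, not of Documentum Server's internal retention handling:

```python
from datetime import date

def retention_expiry(start: date, years: int) -> date:
    """Earliest date the content can be removed when a retention period of
    the given number of years starts on `start` (simple year arithmetic;
    leap-day edge cases are ignored in this sketch)."""
    return start.replace(year=start.year + years)

# Retention period of five years starting January 1, 2007
print(retention_expiry(date(2007, 1, 1), 5))  # 2012-01-01
```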
• Apply WORM automatically: By default, the documents in the file share location
are set to WORM with a default retention date as defined by the storage
administrator. Only Isilon supports the default retention using SmartLock
containers.
• Control retention using retainers: The retention date on the retainer object is
applied to the files. If the maximum retention date given in the retainer is greater
than the default retention date (if one exists), the default retention date is
overridden. Only fixed retention is supported.
Files under retention that are also WORM-enabled cannot be modified or deleted
until the retention date expires.
To view the WORM status of a file or directory, use the following command:
isi worm info --path=<complete path of the file or directory> --verbose
Assignment policies can only be applied to the SysObject type and its subtypes, and
are represented in the repository by persistent objects. A particular object type can
have only one associated assignment policy. When a new content file is added to the
repository, the assignment policy engine determines whether the object type of the
file has an active associated assignment policy. If there is no active assignment
policy for the type, the assignment policy engine determines whether the supertype
of the object has an active associated assignment policy. If there is an active
assignment policy for the file type or a supertype, the system applies the policy and
stores the file accordingly. If no policy is found or if none of the rules match in an
applicable policy, the system uses the default algorithm to determine the correct
storage area. If none of the rules match in the applicable assignment policy, the
policy engine does not further search the type hierarchy.
Assignment policies consist of rules that define the criteria for storing content files in
the correct storage area. There are two types of rules:
• Standard rules
Standard rules determine storage area based only on the object format and
content size. Standard rules can have one to five criteria.
• Custom rules
Custom rules can be based on the values of any standard or custom SysObject
property, provided those values are present before an object is saved. There are
no restrictions on the number of conditions in a custom rule. The properties and
values are specified using methods, such as getString(), getInt(), or
getRepeatingString(). Custom rules follow the Java syntax for any conditional
statements in the rule.
There is no syntactical difference between the two types of rules. During rule
validation, a standard rule is translated into the same syntax used for custom rules.
Assignment policies are applied only to new content files, whether they are primary
content files or renditions. Rules of an assignment policy are applied in the order in
which they are listed within a policy. If a rule is met, the remaining rules are
ignored. To match a rule, all conditions in the rule must be satisfied. An assignment
policy is applied when
Assignment policies are not applied or enforced under the following conditions:
dfc.storagepolicy.ignore.rule.errors=true
If this flag is set to true, the assignment policy engine ignores the faulty rule and
attempts to apply the next rule in the policy.
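The evaluation order described above (rules tried in listed order, all conditions in a rule must hold, first match wins, faulty rules skipped when the ignore flag is set) can be sketched as follows. The rules, property names, and store names here are illustrative, not Documentum APIs:

```python
def select_store(rules, sysobj, ignore_rule_errors=True, default_store="default_store"):
    """Sketch of first-match assignment-policy evaluation. Each rule is a
    (condition, store) pair; a faulty rule is skipped when
    dfc.storagepolicy.ignore.rule.errors is true, otherwise the error
    propagates. If no rule matches, the default algorithm's store is used."""
    for condition, store in rules:
        try:
            if condition(sysobj):   # all of the rule's conditions folded into one predicate
                return store        # first matching rule wins; remaining rules ignored
        except Exception:
            if not ignore_rule_errors:
                raise               # faulty rule stops evaluation
    return default_store            # no rule matched

rules = [
    (lambda o: o["owner_name"] == "JSmith", "filestore_02"),
    (lambda o: o["r_full_content_size"] > 1_000_000, "filestore_03"),
]
print(select_store(rules, {"owner_name": "JSmith", "r_full_content_size": 10}))
# filestore_02
```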
The assignment policy list page displays a list of all assignment policies in the
current repository.
• Policy name
• A brief description of the policy
• Whether the policy is currently Active or Inactive
• The object types to which the policy applies
Field Description
Name The name and a description for the
assignment policy. The name must be unique
in the repository and can be modified after
the policy is saved.
Description A description of the policy. Optional
property.
Status Specifies whether the assignment policy is
active. The default status is Inactive. Select
Active to enable the policy and automatically
validate the rule syntax. The validation
process does not check whether property
names in the rules are valid.
Validate all of the rules defined for this Validates the rules for the policy if the policy
policy is active. The default is selected. If the policy
is created in the active state, the checkbox is
selected and grayed out.
Custom rules follow Java syntax for the conditional statement in the rule. The
following are examples of valid custom rules:
Example Rule 1:
sysObj.getString("owner_name").equals("JSmith") --> filestore_02
Example Rule 2:
sysObj.getString("subject").equals("Policies and Procedures") &&
sysObj.getOwnerName().equals("JSmith") --> filestore_03
Example Rule 3:
sysObj.getString("subject").equals("smith") &&
sysObj.getOwnerName().equals("john") --> filestore_03
Field Description
Name The name of the job. Mandatory property.
Job Type The type of job. Optional property. The
default value is Content.
Trace Level The trace level from 0 (no tracing) to 10 (a
debugging level of tracing).
Designated Server The server on which the migration policy is
run. Select a server from the drop-down list.
The list displays each registered server. The
default value is Any Running Server.
State Specifies whether the policy is active or
inactive. The default value is Active.
Options
Deactivate on Failure Select to deactivate the job after a run fails to
execute correctly.
Run after Update Select to run the job immediately after it is
updated.
Save if Invalid Select to save the job even if it is invalid.
Field Description
Next Run Date and Time Specifies the next start date and time for the
job.
Field Description
Selected Objects
Simple selection Creates a migration rule based on preset
values, such as format, creation date,
modification date, access date, or size.
Move objects where Specifies which objects to move. Select one of
the criteria from the drop-down list, as
follows:
• format: Migrates objects of a particular
format. Click Select and then select the
correct format.
• created: Migrates objects according to
creation date.
• modified: Migrates objects according to
their modification date.
• accessed: Migrates objects according to
the date they were last accessed.
• size: Migrates objects according to their
size in bytes. Enter the number of bytes.
Renditions to include If you selected Move specified types, select
to migrate Primary or Secondary renditions
or both.
Move options
Target Store The destination storage area to which the
content files migrate. Select a store from the
drop-down list. The list includes file stores
and retention stores (Centera and NetApp
SnapLock).
Batch Size The number of content files to include in a
single transaction during the migration
operation. The default value is 500.
Maximum Count The maximum number of content files to
transfer. To specify an unlimited number of
documents, type a zero [0] or leave the field
blank.
Content Migration Threads The number of internal sessions to use to
execute the migration policy. The default
value is 0, indicating that migration executes
sequentially.
Field Description
Title The title of the object.
Subject The subject of the object.
Keywords Keywords that describe the object. Click Edit
to access the Keywords page. Enter a new
keyword in the Enter new value box and
click Add.
Authors The author of the object. Click Edit to access
the Authors page. Type a new author in the
Enter new value box and click Add.
Owner Name The owner of the object. Click Edit to access
the Choose a user page and select an owner.
Show more Click to view more SysObject properties of
the migration policy.
• dmclean
The dmclean utility scans a repository and finds all orphaned content objects
and, optionally, orphaned content files. It generates a script that removes these
objects and files from the repository.
• dmfilescan
The dmfilescan utility scans a specified storage area (or all storage areas) and
finds all orphaned content files. It generates a script that removes these files from
the storage area. dmfilescan should be used as a backup to dmclean.
• dmextfilescan
The dmextfilescan utility scans a specific S3 or S3-compatible store for any
content files that do not have associated content objects and generates a report.
There are multiple options to run the dmclean and dmfilescan utilities:
• Documentum Administrator
• An IDfSession.apply() method
Note: Only a superuser can retrieve the path of orphaned content objects.
In general, the system prevents users from accessing content that has been
orphaned due to versioning or checkin of objects.
The following table describes the arguments for the dmclean utility:
Argument Description
-no_note Directs the utility not to remove annotations.
-no_acl Directs the utility not to remove orphaned
ACLs.
-no_content Directs the utility not to remove orphaned
content objects and files.
-no_wf_templates Directs the utility not to remove orphaned
SendToDistribution templates.
-include_ca_store Directs the utility to include orphaned
content objects representing files stored in
Centera or NetApp SnapLock storage.
If you need syntax help, enter the name of the utility with the -h argument at the
operating system prompt. “Running jobs” on page 455 and “Dmclean” on page 389
contain information on running dmclean using Documentum Administrator.
The executable that runs dmclean is launched by a method that is created when you
install Documentum Server. By default, the dmclean executable is assumed to be in
the same directory as the Documentum Server executable. If you moved the server
executable, modify the method_verb property of the method that launches dmclean
to use a full path specification to reference the executable. You must be in the
%DM_HOME%\bin ($DM_HOME/bin) directory when you launch the dmclean
executable.
Removing the content object and content file fails if the storage system does not
allow deletions or if the content retention period has not expired. To find and
remove the repository objects that have expired content, use the
RemoveExpiredRetnObjects administration tool. Then use dmclean with the -
include_ca_store argument to remove the resulting orphaned content files and
content objects.
The dmclean utility can remove a variety of objects in addition to content files and
content objects. These objects include orphaned annotations (note objects), orphaned
ACLs, and unused SendToDistributed workflow templates. These objects are
removed automatically unless you include an argument that constrains the utility
not to remove them. For example, including -no_note and -no_acl arguments directs
dmclean not to remove orphaned annotations and unused ACLs. If you include
multiple constraints in the argument list, use a space as a separator.
The utility automatically saves the output to a document that is stored in the Temp
cabinet. The utility ignores the SAVE argument in the DO_METHOD. The output is
saved to a file even if this argument is set to FALSE. The output is a generated IAPI
script that includes the informational messages generated by the utility.
Where <name> is the name of the repository against which to run the utility and
<init_file_name> is the name of the server.ini file for the server of the
repository. The name and init_file_name arguments are required.
As dmclean is executing, it sends its output to standard output. To save the output
(the generated script) to a file, redirect the standard output to a file in the command
line when you execute dmclean.
The dmfilescan utility should be used as a backup to the dmclean utility. It can be
executed in Documentum Administrator, using the DQL EXECUTE statement, an
IDfSession.apply() method, or from the operating system prompt. On Windows, the
utility generates a batch file (a .bat script). On Linux, the utility generates a Bourne
shell script. The executing syntax and the destination of the output vary, depending
on how the utility is executed.
The executable that runs dmfilescan is launched by a method that is created during
Documentum Server installation. By default, Documentum Server looks for the
dmfilescan executable in the same directory in which the Documentum Server executable is
stored. If you moved the server executable, modify the method_verb property of the
method that launches dmfilescan to use a full path specification to reference the
executable. You must be in the %DM_HOME%\bin ($DM_HOME/bin) directory
when you launch the dmfilescan executable.
“Running jobs” on page 455 and “Dmfilescan” on page 392 contain the information
on running dmfilescan in Documentum Administrator.
Argument Meaning
-s storage_area_name The name of the storage object that represents the storage area to clean. If this argument is not specified, the utility operates on all storage areas in the repository, except far stores.
-from directory1 Subdirectory in the storage area at which to begin scanning. The value is a hexadecimal representation of the repository ID used for subdirectories of a storage area.
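As an illustration of that hexadecimal form, a storage-area subdirectory name is the repository ID rendered as eight zero-padded hex digits. This is a minimal sketch (the decimal repository ID 969 is an illustrative value, matching the 000003c9 subdirectory seen in this chapter's sample paths):

```python
# A storage-area subdirectory name is the repository ID as eight hex digits.
repo_id = 969                      # illustrative repository ID (decimal)
subdir = format(repo_id, "08x")    # zero-padded to eight hexadecimal digits
print(subdir)                      # 000003c9
```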
The utility automatically saves the output to a document that is stored in the Temp
cabinet. The utility ignores the SAVE argument of the do_method. The output is
saved to a file even if this argument is set to FALSE. The output is a generated script
that includes the informational messages generated by the utility.
The dmfilescan utility sends the output to standard output. To save the generated
script to a file, redirect the standard output to a file on the command line when you
run the utility.
The two arguments, -docbase_name and -init_file, are required, where docbase name is the
name of the repository that contains the storage area or areas to clean and
init_file_name is the name of the Documentum Server server.ini file.
The following example shows a script that was generated on a Windows host:
rem Documentum, Inc.
rem
rem This script is generated by dmfilescan for later verification
rem and/or clean-up. This script is in trace mode by default. To
rem turn off trace mode, remove '-x' in the first line.
rem
rem To see if there are any content objects referencing a file
rem listed below, use the following query in IDQL:
rem
rem c:> idql <docbase> -U<user> -P<pwd>
rem 1> select r_object_id from dmr_content
rem 2> where storage_id = '<storage_id>' and data_ticket =
<data_ticket>
rem 3> go
rem
rem If there are no rows returned, then this is an orphan file.
rem
rem storage_id = '280003c980000100' and data_ticket = -2147482481
del \dm\dmadmin\data\testora\content_storage_01\000003c9\80\00\04\8f
rem storage_id = '280003c980000100' and data_ticket = -2147482421
del \dm\dmadmin\data\testora\content_storage_01\000003c9\80\00\04\cb
rem storage_id = '280003c980000100' and data_ticket = -2147482211
del \dm\dmadmin\data\testora\content_storage_01\000003c9\80\00\05\9d
. . .
On Windows:
c:> <script_file_name>
On Linux:
% <script_file_name>
Make sure that you have permission to execute the file. On Windows, use File
Manager to add the appropriate permission to your user account. On Linux, use the
following command:
% chmod ugo+x <script_file_name>
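The del paths in the generated script shown earlier can be related back to their data_ticket values: the signed 32-bit ticket, reinterpreted as an unsigned integer and written as eight hex digits, yields the byte pairs that form the file's relative path under the storage area. The following is a minimal illustrative sketch of that mapping, not part of the dmfilescan tooling:

```python
def ticket_to_relative_path(data_ticket: int) -> str:
    """Map a signed 32-bit data_ticket to its content-file subpath."""
    unsigned = data_ticket & 0xFFFFFFFF           # reinterpret as unsigned 32-bit
    hex_digits = format(unsigned, "08x")          # e.g. "8000048f"
    # Split into four byte pairs: 80/00/04/8f
    return "/".join(hex_digits[i:i + 2] for i in range(0, 8, 2))

print(ticket_to_relative_path(-2147482481))  # 80/00/04/8f
```

For example, data_ticket -2147482481 from the sample script maps to 80/00/04/8f, the tail of the path in the matching del line.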
Argument Meaning
-docbase_name docbase name Name of the repository.
-user user name Name of the user to connect to the
repository.
-password user password Password of the user to connect to the
repository.
-s target file store The name of an S3 or S3-compatible store.
-from scan from directory1 Subdirectory in the storage area at which to
begin scanning. The value is a hexadecimal
representation of the repository ID used for
subdirectories of a storage area.
-force_delete Removes content files that are younger than
168 hours. If the -force_delete argument is
not included, content files younger than 168
hours are not removed.
-grace_period An integer that defines the grace period for
allowing orphaned content files to remain in
the repository. The default is one week,
expressed in hours. The integer value for this
argument is interpreted as hours.
The dmextfilescan utility sends the progress to standard output and generates a
report named DMExtFilescanDoc.txt at the $DOCUMENTUM/dba/log/<repository
ID>/sysadmin/ location.
Note: If you want to archive fixed data such as email or check images that will
not be changed and must be retained for a fixed period, use the content-
addressed storage option, rather than the archiving capabilities described in
this section. The content-addressed storage option is capable of storing massive
amounts of data with a defined retention period.
3. The Archive tool reads the DM_ARCHIVE event in the inbox and performs the
archiving operation.
When the tool is finished, it sends a completion message to the user requesting
the operation and to the repository operator.
4. If necessary the server calls a DO_METHOD function that moves the dump file
containing the archived file back into the archive directory.
5. After the file is back in the archive directory, the Archive tool performs the
restore operation.
When the tool is finished, it sends a completion message to the user requesting
the operation and to the repository operator.
20.1 Prerequisites
You must have installed the following:
• Documentum Server
• Documentum Administrator
• InfoArchive server
• sample_configuration_Documentum.yml
• sample_eas_documentum.extractor.properties
• sample_connection.properties
Name Description
docbase Specifies the name of the Documentum
repository. The rest of the Documentum
connection parameters must be specified
in dfc.properties available in the
CLASSPATH variable.
producer Specifies the application that generates the
SIP.
password Specifies the password to log in to the
Documentum repository. InfoArchive 4.3
Documentum Connector Guide contains
more information about password
protection and encrypted password.
By default, applications running on the
Documentum Server host are allowed to
make repository connections as the
installation owner without a password.
This is called a trusted login.
If you use the installation owner account,
and run the connector on your
Documentum Server host, leave this field
blank.
sipXslt Specifies the path to an XSL file to be
applied to each SIP file.
pdicontentschema Specifies the content schema that is
written to Preservation Description
Information (PDI) files.
extractTypes Specifies the object type information that
must be extracted.
If the objects belong to multiple object
types (a subtype with its supertypes; for
example, dm_document and
dm_sysobject), all these types are
extracted.
When extracted, the object type
information is stored in the types element
in eas_pdi.xml as part of the
generated SIP.
The default value is true.
username Specifies the user name to log in to the
Documentum repository.
holding Specifies the name of the application
where the data that needs to be migrated
exists. For example, Documentum.
sipXsd Specifies the XSD file for verifying each of
the generated SIP files.
workingdir Specifies the complete path of the working
directory in which to generate SIPs. If the
directory does not exist, the utility creates
it when executed. For example:
workingdir=C:\\Documentum\\IA
extractRelations Specifies that the relationship information
(represented by dm_relation_type and
dm_relation objects) associated with the
selected objects must be extracted. When
extracted, relationship information is
stored in the relations element in
eas_pdi.xml as part of the generated SIP.
The default value is true.
entity Specifies the application name that holds
the migrated and archived data.
extractContents Specifies the information about the content
files associated with the selected objects
that must be extracted. When extracted,
the content file information is stored in the
contents element in eas_pdi.xml as
part of the generated SIP. The default
value is true.
sipschema Specifies the SIP schema that is written to
the SIP descriptor file.
pdischema Specifies the PDI schema that is written to
the SIP files.
maxObjectsPerSip Specifies the maximum number of objects
in a SIP.
dqlPredicate Specifies the partial DQL query string
after the FROM clause. The string you
specify here is automatically appended to
the preset string "select r_object_id
from" to form the complete DQL query
statement when you execute the utility.
For example, if you set dqlPredicate as
follows:
dm_document where object_name like
'ABCD1%'
enableLargeXMLprocessing (Optional) Specifies that large XML files
must be processed. If you set the value to
true, it splits large XML files into
chunks. The default value is false.
maxTablesPerSIP Specifies the maximum number of
registered table objects per SIP.
xslt Specifies the path to the XSLT file.
includePDIHashInSIP Specifies that a hash value of each PDI file
is written to a SIP when set to true.
extractFolders Specifies that the information about the
folder (and all the parent folders of
the folder, if any) of the selected objects
must be extracted. When extracted, the
folder information is nested in the folders
element in eas_pdi.xml as part of the
generated SIP. The default value is false.
extractACLs Specifies that ACLs must be extracted. The
default value is false.
priority Specifies the ingestion priority of the SIP.
The greater the value, the higher the
priority.
The order in which SIPs are ingested is
determined first by ingestion priority
(higher-priority SIPs are ingested first)
and then by ingestion deadline date (SIPs
with earlier deadlines are ingested first).
extractContainments Specifies that virtual documents must be
extracted. The default value is true.
registeredTables Specifies the DQL query parts after the
FROM clause related to the registered
tables that must be extracted.
shouldDeleteDCTMObjects Specifies if the objects or data that is
migrated and archived in InfoArchive
must be deleted. The default value is
false.
isDebugModeEnabled Specifies if the debug mode is enabled.
The default value is true, which retains
the ZIP file that contains the SIP file
contents in the working directory. If you
do not want to retain the ZIP file that
contains the SIP file contents, set the
value to false.
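Pulling several of the variables above together, a hypothetical extractor properties file might look like the following sketch. Every value is an illustrative placeholder, not a product default:

```properties
# Illustrative values only -- adjust for your environment
docbase=myrepo
username=dmadmin
password=
producer=Documentum
holding=Documentum
entity=Documentum
workingdir=C:\\Documentum\\IA
maxObjectsPerSip=1000
# Appended to the preset "select r_object_id from" to form the full DQL query
dqlPredicate=dm_document where object_name like 'ABCD1%'
extractACLs=false
extractFolders=false
```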
8. Open the connection.properties file and provide the appropriate values for
the variables as described in the following table:
Name Description
ia.server.uri Specifies the URL of InfoArchive services.
ia.server.authentication.gateway Specifies the IP address of the
authentication gateway.
ia.server.authentication.user Specifies the email ID of the user that must
be authenticated.
ia.server.authentication.password Specifies the password of the user that
must be used for authentication.
ia.server.client_id Specifies the client ID.
ia.server.client_secret Specifies the encrypted information of the
client.
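Based on the table above, a hypothetical connection.properties might look like the following sketch. Every value is an illustrative placeholder:

```properties
# Illustrative values only -- adjust for your environment
ia.server.uri=https://ia.example.com/infoarchive
ia.server.authentication.gateway=10.0.0.1
[email protected]
ia.server.authentication.password=<encrypted-password>
ia.server.client_id=infoarchive.client
ia.server.client_secret=<encrypted-secret>
```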
1. Create a job.
5. Check the server.log file of Method Server for information about logs.
The job migrates and archives the Documentum application data to InfoArchive.
Note: If you want to run the jobs in parallel, perform the following steps:
4. Assign the argument using the -file_name <name of the new properties
file> command.
For publishing, IDSx must be installed on the computer where the repository is
hosted (the source machine) and on the website host (the target machine). For
replication, the IDSx target software must be installed on all the replication targets.
IDSx uses accelerated data transfer technology for faster file transfer.
Label Information
Initial Publishing Date The date documents were first published
using this configuration.
Refresh Date The date of the last successful full refresh of
the content delivery configuration.
Last Increment Date The date of the last successful incremental
publish event for the content delivery
configuration.
Increment Count The number of successful incremental
updates since the initial publish operation or
last full refresh.
Publishing Status Indicates whether the last publishing event
succeeded or failed.
Event Number Unique number generated internally for each
publishing operation.
Field Value
Info
State Select Active to indicate that this content
delivery configuration is active. The default
state is Active.
Version Defines which version of the document to
publish. If unspecified, the default is the
CURRENT version.
Field Value
Property Export Settings
Add properties as HTML Meta Tags If selected, the system inserts document properties into
HTML content files as META tags on the target host.
If this setting changes after the initial publishing event for the
configuration, republish using the Full Refresh option.
Export Properties If selected, the system exports a default set of properties for
each published document.
If this setting changes after the initial publishing event for the
configuration, republish using the Full Refresh option.
Include contentless properties If selected, documents in the publishing folder without
an associated content file are published. Only the properties
associated with the contentless document are published.
Field Value
Content Selection Settings
Formats The content formats to publish. If specified, only documents
with the listed formats are published. If unspecified, all
formats are published.
Website Locale Web Publisher only. Replaces the global_locales extra
argument that was added manually to the content delivery
configuration in prior versions.
Select a locale from the drop-down list. If using Web
Publisher and a document exists in more than one translation
in the publishing folder, the locale code indicates which
translation to publish and also points to the Web Publisher
rules that define the second and subsequent choices of
translation to publish.
The drop-down list contains choices only when you are using
Web Publisher and the publishing folder is configured for
multilingual use.
Post-Synch Script on Target The name of a script located in the product/bin directory of
the target host to be run after publishing occurs. If online
synchronization is enabled, the script runs after online
synchronization takes place. There is a 48-character limit for
information typed into this field.
Ingest Settings
Ingest Select this option if you want to ingest content from the target
to the repository.
Target Ingest Directory Enter the directory path of the target from where the content
will be ingested.
Transfer Authentication Settings
Enable system authentication on target Select to require a transfer username and password for
authentication. If not selected, the transfer username and
password are not required for authentication before a data
transfer occurs.
User Name Identifies the user whose account will be used by the transfer
agent to connect to the target host.
Password The password for the user specified in User Name.
Confirm Password Enter the password again for confirmation.
Domain Identifies the domain of the user specified in User Name.
Replication Target Host Settings
Field Value
State You can select one of the following states:
• Active in-transaction: Select this option if
you want a replication target to
participate in a transactional replication.
This is the default value.
• Active not-in-transaction: Select this
option if you want the replication target
to participate in a non-transactional
replication.
• Inactive: Select this option if you want
to take a replication target out of service
for system maintenance or to take it out
of commission.
Target Host Name Identifies the target host machine to which
documents are published. This is the
replication target host, a host where the
Interactive Delivery Services Accelerated
(IDSx) target software is installed.
Target Port The port number on the website host
machine to use for connections. This must be
the port designated when the replication
target software was installed.
Target UDP Port The UDP port on the target host that is
used for accelerated file transfer. A unique
UDP port must be used for each IDSx
configuration, regardless of whether the
same or different IDSx targets are used.
Connection Type May be Secure or Non-secure. This is the
type of connection used for connections from
the source host to the target host. The default
is Secure.
Target Root Directory The physical directory on the target host
where the transfer agent places the export
data set for publication. Also known as the
webroot.
Ingest Select this option if you want to ingest
content from the replication target to the
repository.
Target Ingest Directory Enter the directory path of the source where
the content will be stored.
Transfer Authentication Settings
User Name Enter the user name for the data transfer.
Password Enter the password for the data transfer.
Domain If the target host is on Windows, optionally
type in a domain where the transfer user
exists. If you type in a domain, it overrides
the domain information provided on the target
host during installation.
addReplicationTarget Adds multiple replication targets.
Note: To get LDAPSync debug information, use method_trace_level 8. To get
verbose debug information, use method_trace_level 10.
If use_format_extensions is
set to FALSE, files are
published with the format
extensions defined in the
repository for the format.
If use_format_extensions is
set to TRUE and a particular
extension is defined as valid
for a particular format, files
of that format with that
extension are published with
the extension.
If use_format_extensions is
set to TRUE and a particular
extension is not defined as
valid for a particular format,
files of that format with that
extension are published with
the format extensions defined
in the repository for the
format.
format Use with the use_format_extensions key. Takes the format:
format.format_name=semicolon-separated_extensions
force_serialized When set to TRUE, single-item publishes are performed
serially rather than in parallel. Default: FALSE
For example:
additional_media_properties=dmr_content.x_range;dmr_content.z_range
Note: Concurrent single-item publishing at the same time as auto replication
causes a considerable load on the staging target. Hence, set the extra
argument lock_exitifbusy_flag along with auto_replication to TRUE. The
subsequent replication process will consider all the published batches that
were left unused during the creation of the previous replication batch.
full_refresh_transactional_replication The replication process can start a
transactional replication, even when the replication batch being replicated
contains a full refresh publish. Replication Manager can be enhanced to start
a transactional replication by setting the extra arguments
full_refresh_transactional_replication and full_refresh_backup to TRUE for a
full refresh publish. Default: FALSE
post_replication_script_on_staging_target Use this script to perform any
arbitrary action on the staging target after replication is complete on all
replication targets. No default value.
The end-to-end tester creates a log file in the repository whether the test fails or
succeeds. View the resulting log file after running the tester. If the test fails, examine
the log file to determine which element of your IDS/IDSx installation is not working.
You can read the file from Documentum Administrator or retrieve it directly from
the repository where IDS/IDSx log files are stored in the /System/Sysadmin/Reports/
Webcache folder.
For example, a document might have the effective label, effective date, and
expiration date properties set as follows:
Setting the effective label of the document to REVIEW means the document will be
published on March 16, 2008 and removed from the website on March 26, 2008.
Setting the effective label to APPROVED means the document will be published on
April 10, 2008 and withdrawn on April 10, 2009.
Documents whose effective label does not match the effective label set in the content
delivery configuration are not published, regardless of the values set for effective
date and expiration date.
Indexing management
22.1 Indexing
A full-text index is an index on the properties and content files associated with
documents or other SysObjects or SysObject subtypes. Full-text indexing enables the
rapid searching and retrieval of text strings within content files and properties.
You must have system administrator or superuser privileges to start, stop, or disable
index agents, start or stop xPlore, and manage queue items. OpenText Documentum
xPlore Administration Guide contains the information on editing the properties of the
index agent configuration object and other full-text configuration objects.
The index agent exports documents from a repository and prepares them for
indexing. A particular index agent runs against only one repository. xPlore creates
full-text indexes and responds to full-text queries from Documentum Server.
Note: If you are using Documentum Server with xPlore, by default, sorting on
attribute is performed by the database on results returned from xPlore. To alter
the behavior, add dm_ft_order_by_enabled to dm_docbase_config:
retrieve,c,dm_docbase_config
append,c,l,r_module_name
dm_ft_order_by_enabled
append,c,l,r_module_mode
save,c,l
reinit,c
This is applicable for sorting results supported in xPlore with the appropriate
index configuration.
An index agent that is disabled cannot be started and is not started automatically
when its ACS server is started. You must enable the index agent before starting it.
“Enabling index agents” on page 585 contains the information on enabling a
disabled index agent. If the status of the index agent is Not Responding, examine the
machine on which it is installed and make sure that the software is running.
Caution
Stopping the index agent interrupts full-text indexing operations, including
updates to the index and queries to the index. An index agent that is
stopped does not pick up index queue items or process documents for
indexing.
By default, the list page displays failed queue items. To filter the queue items by
status, choose the appropriate status on the drop-down list:
Queue items that have failed indexing can be resubmitted individually, or all failed
queue items can be resubmitted with one command. OpenText Documentum xPlore
Administration Guide contains the instructions on resubmitting objects for indexing.
A repository can be polled by multiple CTS instances. All CTS instances polling the
repository are displayed in the Content Transformation Services node in
Documentum Administrator.
Use Content Intelligence Services (CIS) to analyze the textual content of documents
and know what the documents are about without having to read them.
By default, CIS analyzes the content of the documents, including the values of the
file properties. You can change the default behavior and have CIS analyze the values
of the Documentum object attributes in addition to, or instead of, the content of the
documents.
Resource management
Users access the MBean resources in the distributed network through a resource
agent (JMX agent) to obtain a list of available MBean resources that they can
manage. The Resource Management node displays a list of the resource agents;
however, only a system administrator can create, delete, or update resource agents.
System administrators can add, delete, and edit resource agents. A resource agent
may require authentication to access its resources (MBeans). Documentum
Administrator will first attempt to authenticate the user using the current session
login information. If that fails, then Documentum Administrator prompts for a
username and password.
• The Resource Properties - Info page displays key information about the resource.
The polling interval defines the frequency to poll the resource for activity. This is
not used in the current release.
• The Resource Properties - Attributes page displays the resource attributes.
Writeable attributes provide an input control to update the attribute value.
Attribute changes will be updated on the resource by clicking the Save
Changes or OK button.
• The Resource Properties - Operations page displays the operations that can be
performed. Selecting an operation displays the operations dialog, which enables
you to enter any required data, perform the operation, and view the results (if
the operation has results).
• The Resource Properties - Notifications page displays the resource notifications
you are subscribed to.
• The Resource Properties - Log page enables you to:
• AcsServerMonitorMBean
• AcsServerConfigMBean
• JmxUserManagementMBean
• IndexAgent
To manually configure:
The number of rows returned by a DQL statement is limited based on the width of
the rows requested. The query results may be truncated. When this happens, a
warning message appears.
Note: Methods entered in the API text field bypass the Documentum
Foundation Classes (DFC) and directly access the Documentum Client
Libraries (DMCL). Therefore, the DFC cannot perform its usual validation of
the methods.
• If you are in Single-Line mode, enter the command and any necessary data
in the Command and Data text boxes.
• If you are in Script Entry mode, type the method and its arguments in the
Commands text box.
4. To display the SQL statement produced by the query, select Show the SQL.
5. Click Execute.
The results are returned.
28.1 Searches
Documentum Administrator offers different ways to search repositories and external
sources. By default, Documentum Administrator provides a simple search and an
advanced search. If Federated Search Services is installed on the Documentum
Server host, administrators also have the option to create search templates and
configure smart navigation.
A full-text search searches the files in the default search location that the user
specified in the search preferences. The search can include several repositories at the
same time, as well as external sources such as external databases, web sources, or the
desktop.
When displaying search results, Documentum Administrator displays files with the
most matching words first. If a repository has been indexed for parts of speech,
Documentum Administrator also displays files that include variations of the words.
For example, if a user searches for scanning, Documentum Administrator also looks
for files that contain the words scan, scanned, and scanner.
Syntax Description
Quotation marks To search for an exact word or phrase, type quotation marks around
around a word or the word or phrase.
phrase: “ ”
For a simple search (including the Contains field in an advanced
search), if you do not use quotation marks, Documentum
Administrator displays files that contain both the exact words you
typed as well as variations of the words, such as scanning for the
word scanner.
This option is disabled when searching for more than one word or if
your repository has not been indexed for variations.
The AND and OR To get results that contain two search terms, type AND between the
operators terms. A term can be a word or quoted phrase.
To get results that contain at least one term, type OR between the
words or the quoted phrases.
You can string together multiple terms with the AND and OR
operators. The AND operator has precedence over the OR operator.
For example, if you type knowledge or management and discovery, the results
contain either knowledge, or both management and discovery.
If you type Documentum OR NOT adapter, you get results that either
contain Documentum (and possibly contain adapter) or that do not
contain adapter. Use this syntax cautiously. It can generate a very
large number of results.
The NOT operator can be used alone at the beginning of the query.
For example, if you type NOT adapter, you get results that do not
contain adapter. Use this syntax cautiously. It can generate a very
large number of results.
Parentheses around To specify that certain terms must be processed together, use
terms: ( ) parentheses. When using parentheses, you must type a space before
and after each parenthesis mark, as shown here: ( management or
discovery )
Notes
• The operators AND, OR, and NOT are reserved words. To search a term that
includes an operator, use quotation marks. For example, if you search for
“hardware and software”, Documentum Administrator returns documents
with that string of three words. If you type hardware, and software without
quotation marks, Documentum Administrator returns all of the documents
that contain both words.
• The operators AND, OR, and NOT are not case-sensitive. For example, for
your convenience, you can type: AND, and, And.
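The precedence rule above (AND binds tighter than OR) can be sketched as a tiny evaluator for parenthesis-free queries. This is an illustration only, not how the search engine is implemented:

```python
def matches(text: str, query: str) -> bool:
    """Evaluate a word query in which AND binds tighter than OR."""
    words = set(text.lower().split())
    # Split on OR first; each disjunct is a run of AND-ed terms.
    for disjunct in query.lower().split(" or "):
        if all(term in words for term in disjunct.split(" and ")):
            return True
    return False

# "knowledge or management and discovery" parses as
# knowledge OR (management AND discovery):
print(matches("management discovery notes",
              "knowledge or management and discovery"))  # True
```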
• Indexing: Indexing capabilities can be used to define more precise queries. For
example, wildcards can only be used if the repository is indexed; if not, they are
skipped. If you want to run complex queries, consult the system administrator
for details on the indexing configuration of the repository.
• Relevancy ranking: The system administrator can specify a bonus ranking for
specific sources, add weight for a specific property value or improve the score for
a specific format.
• Presets: The system administrator can define a preset to restrict the list of
available types in the Advanced search page. Presets can be different from one
repository to another. If you select only external sources, the preset of the current
repository applies.
• Customization of the Advanced search page: The Advanced search page can be fully
customized to guide you in running queries. For this reason, all the options
described in this guide may not be available, and others may appear to narrow
and/or condition your queries.
• Maximum number of results: The maximum number of results is defined at two
levels. By default, the maximum number of results, taking all sources together, is
1000, with 350 results per source. However, your system administrator can modify
• Understanding inboxes
• Opening a task or notification
• Performing a task
• Completing a task
• Accepting a group task
• Rejecting a task
• Delegating a task
• Repeating a task
• Changing availability of tasks
• Understanding work queue tasks
30.1 Workflows
A workflow is an automated process that passes files and instructions between
individuals in sequence to accomplish specific tasks. When users are assigned a
workflow task, the task appears in their Inbox.
Workflows can include automatic tasks that the system performs, such as the
execution of scripts. Automatic tasks allow the integration of workflows and
lifecycles.
You must stop and restart Documentum Server for your change to take effect.
Reinitializing the server does not make the change effective.
If the shutdown timeout period is exceeded (or is set to zero), the workflow agent
shuts down immediately. When the workflow agent does not shut down gracefully,
automated tasks in certain states can be corrupted. If tasks are corrupted, restarting
the Documentum Server does not result in resumption of the managed workflows.
The server log file indicates when a graceful shutdown did not occur. Use the
RECOVER_AUTO_TASKS administration method to recover the automated tasks.
If the timeout period is a negative value, Documentum Server waits for the
workflow agent threads to complete the automatic tasks held by workflow agent
workers before shutting down gracefully.
The default system shutdown timeout setting is 120 seconds. “Modifying general
server configuration information” on page 39 contains more information on the
server configuration properties.
• If the system creates automatic tasks when the workflow agent is processing the
previously collected tasks, then the workflow agent ignores the
wf_sleep_interval and collects the next set of automatic activities.
• If the system has not created an automatic task when the workflow agent is
processing the previously collected tasks, then the workflow agent sleeps for the
time specified in wf_sleep_interval.
• If the system creates tasks when the workflow agent is sleeping, then the agent
wakes up immediately and collects the next set of automatic activities.
• To prevent the workflow agent from querying for work items from workflows that
already have other work items assigned.
• To create logic that skips task assignment if another task from the same
workflow is already assigned.
Stop and restart Documentum Server for the change to take effect. Reinitializing the
server does not make the change effective.
The administrator or manager can use the Work Queue Monitor to view the tasks in
the queue, the name of the processor assigned to the task, the status of the task,
when the task was received, and the current priority of the task.
Role | Description
Queue_processor | Works on items that are assigned by the system from one or more work queue inboxes. Queue processors can request work, suspend and unsuspend work, complete work, and reassign their work to others.
Queue_advance_processor | Works on items that are assigned by the system from one or more work queue inboxes. Additionally, selects tasks to work on from one or more work queue inboxes.
Queue_manager | Monitors work queues, assigns roles to queues, and assigns users to work on queue items. Queue managers can reassign and suspend tasks.
30.3 Lifecycles
A lifecycle defines a sequence of states a file can encounter. Typically, lifecycles are
designed for documents to describe a review process. For example, a user creates a
document and sends it to other users for review and approval. The lifecycle defines
the state of the file at each point in the process.
The lifecycle itself is created using Documentum Composer and is deployed in the
repository as part of an application. Documentum Administrator manages the
lifecycles that already exist in a repository. All lifecycle procedures are accessed
through the Tools menu in Documentum Administrator.
30.4 References
OpenText Documentum Administrator User Guide contains the information and
instructions on the following:
• Starting a workflow
• Sending a quickflow
• Viewing workflows
• Pausing a workflow
• Resuming a paused workflow
• Stopping a workflow
• Emailing the workflow supervisor or a workflow performer
• Processing a failed task in a workflow
This chapter provides recommendations and best practices to help you improve the
overall performance of the Documentum platform.
Note: The best practices and/or test results are derived or obtained after testing
the product in our testing environment. Every effort is made to simulate
common customer usage scenarios during performance testing, but actual
performance results will vary due to differences in hardware and software
configurations, data, and other variables.
– Chatty or redundant application code is the most common cause of failure (40
percent, per a 1995 IEEE study)
– Complex object models
– Poor memory and cache management
• Network
Replication
Replication can be configured either as read/write or read only.
Disaster recovery
Disaster recovery is not the same as high availability. It assumes a total loss of the
production data center. Disaster recovery servers are separate and independent from
the main center. They share no resources, and each has an independent network
infrastructure for WAN replication. Both systems have the capacity to carry out full
normal and emergency workloads. They must be maintained to be compatible.
31.2.1 Concurrency
• Unlike earlier versions of Documentum, concurrency in Documentum 7.0 is not
limited by the availability of memory on the Documentum Server when a large
number of types is used. With the gains in concurrency seen in Documentum 7.0
for such use cases, additional physical memory can be added to obtain more
concurrency. The following is an observation from a low-level DFC-based test
dealing with 1000 types of varying hierarchies.
• The bottleneck is more likely the server session limitation for a single
Documentum Server. The maximum number of sessions (that can run type
related operations) on a single Documentum Server is still limited by the
max_concurrent_sessions setting on the server.
• The following table lists some observations on a host based on a Windows 2008
R2 based VM template with 8 GB of RAM:
– For a system configuration with two and four cores, the optimal thread
number is 5X cores.
• There is very little performance impact from extending the group entry size and
hierarchy levels. CPU and memory are not affected by the larger LDAP size.
From the test results, it can be seen that the LDAP NG synchronization speed
depends only on the number of Documentum Server cores and job working
threads.
The session management enhancements have different levels of gains based on how
sessions are created and used by the client transactions. The following list shows
three classes of session usage:
The performance implications in Documentum 7.0 for the three classes of session
usage are the following:
dfc.session.reuse_limit 100
dfc.connection.reuse_threshold 50
dfc.connection.cleanup_check_window 180
dfc.connection.queue_wait_time 500
dfc.session.max_count 1000
concurrent_sessions 100
In the same test, Documentum 7.0 consumed less system resource than 6.7.
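As a sketch, the dfc.* settings above would appear as key=value pairs in a client's dfc.properties file. The values shown are the ones from the test configuration above, not recommendations for every environment; concurrent_sessions, by contrast, is a Documentum Server setting (server.ini), not a dfc.properties key.

```properties
# Values from the test configuration above; tune for your environment.
dfc.session.reuse_limit = 100
dfc.connection.reuse_threshold = 50
dfc.connection.cleanup_check_window = 180
dfc.connection.queue_wait_time = 500
dfc.session.max_count = 1000
```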
1. The operating system has a restriction of only 4096 file descriptors per user.
Edit /etc/security/limits.conf and set the following:
csuser soft nofile 8192
csuser hard nofile 16384
2. The kernel limits the number of user processes per user to 40.
Edit /etc/security/limits.conf and set the following:
csuser soft nproc 2047
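Taken together, and assuming the installation owner account is csuser as in the examples above, the resulting /etc/security/limits.conf entries would look like this sketch:

```
# /etc/security/limits.conf entries for the installation owner (csuser assumed)
csuser soft nofile 8192
csuser hard nofile 16384
csuser soft nproc 2047
```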
Events that cannot be registered for auditing or notification are noted in their
descriptions.

addDigitalSignature methods are always audited. It is not possible to unregister this event.

Event | Object type(s) | Display name | Description
dm_addesignature | dm_sysobject | Add Electronic Signature | Execution of the addEsignature method. addEsignature methods are always audited and the entries are always signed by Documentum Server. It is not possible to modify this behavior.
dm_addesignature_failed | dm_sysobject | Add Electronic Signature Failed | An attempt to execute the addEsignature method failed. These failures are always audited. It is not possible to modify this behavior.

Registrations for auditing are always audited. It is not possible to unregister this event.

dm_authenticate | dm_user | Authenticate User | Execution of an authenticate method. Executing an authenticate method also generates a dm_connect event if the authentication requires a repository connection.
dm_branch | dm_sysobject, dm_retainer | Branch Version | Execution of a branch method.
dm_checkin | dm_sysobject, dm_retainer | Checkin Object | Checking in an object.

Note: An authenticate method may also establish a repository connection, which
will generate a dm_connect event in addition to the dm_authenticate event.

dm_destroy | dm_sysobject, dm_acl, dm_user, dm_group, dm_retainer | Destroy Object | Execution of a destroy method.
dm_disassemble | dm_sysobject | Disassemble Virtual Document | Execution of a disassemble method.
dm_disconnect | dm_user | Logoff | Terminating a repository connection.
dm_fetch | dm_sysobject, dm_retainer | Fetch Object | Fetching an object from the repository.
dm_freeze | dm_sysobject | Freeze Object | Execution of a freeze method.
dm_getfile | dm_sysobject | View/Export | Occurs when a document is viewed or exported or when a getContent or getPath method is executed.
dm_getlogin | dm_user | Get Login | Execution of a getLoginTicket method.
dm_insertpart | dm_sysobject | Insert into Virtual Document | Execution of an insertPart method.
dm_install | dm_policy, dm_process, dm_activity | Install | Execution of an install method.

Note: You cannot audit a Kill method that flushes the cache.

dm_link | dm_sysobject | Link To | Execution of a link method.
dm_lock | dm_sysobject | Lock Object | Execution of a lock method.
dm_logon_failure | dm_user | Logon Failure | User attempts to start a repository connection using invalid credentials.
dm_mark | dm_sysobject | Add Version Label | Execution of a mark method.
dm_move_content | dmr_content | Move Content | Execution of a MIGRATE_CONTENT administration method.
dm_prune | dm_sysobject, dm_retainer | Prune Versions | Execution of a prune method.
dm_purgeaudit | dm_audittrail, dm_audittrail_acl, dm_audittrail_group | Purge Audit | Execution of a destroy method or PURGE_AUDIT administration method to remove audit trail entries. Purging an audit trail entry is always audited. It is not possible to stop auditing this event.

This event is generated automatically for the full-text user.

dm_remove_dynamic_group | dm_group | Remove From Dynamic Group | Execution of an IDfSession.removeDynamicGroups call.
dm_removecontent | dm_sysobject | Remove Content | Execution of a removeContent method.
dm_removenote | dm_sysobject, dm_process | Remove Note | Execution of a removeNote method.
dm_removepart | dm_sysobject | Remove from Virtual Document | Execution of a removePart method.
dm_removerendition | dm_sysobject | Remove Rendition | Execution of one of the removeRendition methods.
dm_removeretention | dm_sysobject | Remove Retention | Object removed from control of a retention policy.
dm_restore | dm_sysobject | Restore Object | Execution of a restore method.

For checked-out objects that are modified by users in the
dm_escalated_allow_save_on_lock group, Documentum Server stores the value
SAVE_ON_LOCK in the string_2 attribute.

dm_saveasnew | dm_sysobject, dm_acl, dm_retainer | Copy Object | Execution of a saveAsNew method.
dm_setfile | dm_sysobject | Set Content | Execution of the following methods: appendContent, appendFile, bindFile, insertFile, insertContent, setContent, setPath.
dm_setoutput | dm_process | Setoutput | Execution of a setOutput method.
dm_setretentionstatus | dm_retainer | Set Retention Status | Change to the status of a retainer object.
Events arising from methods that are specific to workflows are described in
“Workflow events” on page 635.

Note: This does not include dm_validate, dm_install, dm_invalidate, and
dm_uninstall events, and dm_changestateprocess events.
Event | Object type(s) | Display name | Description
dm_abortworkflow | dm_process | Abort Workflow | Aborting a workflow.
dm_addattachment | dm_process | Add Attachment | An attachment is added to a running workflow or work item.
dm_addpackage | dm_process | Add Workflow Package | Execution of an addPackage method.
dm_autotransactivity | dm_process | Automatic Workflow Activity Transition | An automatic activity transition occurs.

Note: This event is not triggered if the activity has a transition type of
prescribed. It is not possible to register for this event.

dm_save_workqueuepolicy | Not applicable | Not applicable | Generated when a workqueue policy object is saved by the Workqueue SBO. It is not possible to register for this event.
dm_selectedworkitem | dm_process | Select Workitem | A work item is acquired.
dm_startworkflow | dm_process | Start Workflow | A workflow is started.
dm_startedworkitem | dm_process | Start Workitem | A work item is generated.
dm_suspendedworkqueuetask | dm_process | Suspend Workqueue Tasks | A task on a workqueue is suspended.
dm_unsuspendedworkqueuetask | dm_process | Unsuspend Workqueue Tasks | A task on a workqueue is resumed.
dm_wf_autodelegate_failure | dm_process | Auto Delegation Failed | Automatic delegation of an activity failed.
Several API events described in “DFC events” on page 628 are also applicable to
workflows.
• dm_checkin
• dm_destroy
• dm_move_content
• dm_readonlysave
• dm_save
Note: If the server_codepage property is set in the server.ini file, then you
must set that property to UTF-8.
It is recommended that you add only the required locales to the data dictionary.
To add a new locale, use the population script provided by Documentum. To add to
or change the locale information for installed data dictionary locales, you can use:
Only some of the data dictionary information can be set using a population script.
(“Summary of settable data dictionary properties” on page 643, describes the data
dictionary properties that you can set in a population script.) The information that
you cannot set using the population script must be set using the DQL statements.
Additionally, you must use DQL if you want to set data dictionary information that
applies only when an object is in a particular lifecycle state.
“Using DQL statements” on page 653, discusses using DQL to populate the data
dictionary.
Share and map the %DM_HOME%\bin directory (Windows) of the repository
machine, or mount the $DM_HOME/bin directory (Linux) of the repository machine
(with the appropriate permissions), onto the corresponding localized machine, and
then execute dd_populate.ebs from the localized machine. In addition, on Windows,
you can specify the UNC path to %DM_HOME%\bin.
Note: For Russian and Arabic, if the repository machine is running the English
operating system, then you can change the regional or language settings of the
repository machine to one of the corresponding locales and then execute the
dd_populate.ebs on the repository machine.
[NLS_FILES]
<NLS_data_dictionary_data_file_1>
<NLS_data_dictionary_data_file_2>
. . .
<NLS_data_dictionary_data_file_n>
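For illustration, an [NLS_FILES] section that references the default English and German data files by name (the file names follow the data_dictionary_&lt;localename&gt;.txt convention described later in this appendix; your initialization file may list different files):

```ini
[NLS_FILES]
data_dictionary_en.txt
data_dictionary_de.txt
```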
The non-NLS data dictionary files contain the data dictionary information that is not
specific to a locale.
The NLS data dictionary files contain the information that is specific to a locale.
You can include any number of data files in the initialization file. Typically, there is
one file for the non-NLS data and an NLS data file for each locale. You can reference
the data files by name or full path specification. If you reference a data file by name
only, the data file must be stored in the same directory as the population script or it
must be in %DM_HOME%\bin ($DM_HOME/bin).
• Non-NLS
• NLS
In the non-NLS data files, the information you define is applied to all locales. In
these files, you define the information that is independent of where the user is
located. For example, you would set the is_searchable or ignore_immutable setting
of the property in a non-NLS-sensitive file.
In an NLS data file, all the information is assumed to apply to one locale. In these
files, you identify the locale at the beginning of the file and only the dd type info and
dd attr info objects for that locale are created or changed. For example, if you set the
label text for a new property in the German locale, the system sets label_text in the
dd attr info object of the new property for the German locale. It does not set
label_text in the dd attr info objects of the property for other locales.
If you do set NLS information in a non-NLS data file, the server sets the NLS
information in the current locale of the session.
If you include multiple NLS files that identify the same locale, any duplicate
information in them overwrites the information in the previous file. For example, if
you include two NLS data files that identify Germany (de) as the locale and each sets
label_text for properties, the label text in the second file overwrites the label text in
the first file. Unduplicated information is not overwritten. For example, if one
German file sets information for the document type and one sets information for a
custom type called proposals, no information is overwritten.
To specify the locale in an NLS file, place the following key phrases as the first lines
in the file:
• LOCALE=<locale_name>
• CODEPAGE=<codepage_name>
<codepage_name> is the name of the code page appropriate for the specified locale.
Table B-1: Locale-based configuration settings for the data dictionary files
installed with Documentum Server
On Linux: EUC-JP
Korean | ko | EUC-KR | EUC-KR
Portuguese (Brazilian) | pt | ISO-8859-1 | ISO-8859-1
Russian | ru | Windows-1251 | Windows-1251
Spanish | es | ISO-8859-1 | ISO-8859-1
Swedish | sv | ISO-8859-1 | ISO-8859-1
Hebrew | iw | ISO-8859-8 | ISO-8859-8
TYPE=<type_name>
<data_dictionary_property_settings>
If you want to define information for multiple properties of one object type, specify
the type only once and then list the properties and the settings:
TYPE=<type_name>
property=<property_name>
<data_dictionary_property_settings>
{property=<property_name>
<data_dictionary_property_settings>}
Each data dictionary property setting must fit on one line.
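As a hypothetical sketch of this syntax, the following fragment sets label text for two properties of a custom type; the type and property names are invented for illustration:

```
TYPE = my_custom_type
property = title
LABEL_TEXT = Document Title
property = author_name
LABEL_TEXT = Author Name
```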
“Summary of settable data dictionary properties” on page 643 lists the data
dictionary properties that you can set in data files. “Examples of data file entries”
on page 649 shows some examples of how these are used in a data file.
“Settable data dictionary properties using population script” on page 643, lists and
briefly describes the data dictionary properties that you can set using a population
script.
Default_search_arg, if set, must be set to one of the allowed_search_ops.

Including REPORT (not_null_msg) or ENFORCE (not_null_enf) is optional. If you
include both, REPORT must appear first. COMMENT is optional.

VALUE and DISPLAY must be set. Set each once for each value that you want to
map. “Examples of data file entries” on page 649 contains an example.

Including REPORT (primary_key_msg) or ENFORCE (primary_key_enf) is optional.
If you include both, REPORT must appear first.

Including REPORT (unique_key_msg) or ENFORCE (unique_key_enf) is optional.
If you include both, REPORT must appear first.

At the property level:
FOREIGN_KEY=<type>(<attr>)
REPORT=<string>
ENFORCE=<string>
If you set the key at the type level, you must identify which property of the type
participates in the foreign key and which property in another type is referenced by
the foreign key. The key phrase FOREIGN_KEY defines the property in the object
type that participates in the foreign key. The keyword REFERENCES defines the
property which is referenced by the foreign key. For example, suppose a data file
contains the following lines:
TYPE=personnel_action_doc
FOREIGN_KEY=user_name
REFERENCES=dm_user(user_name)
These lines define a foreign key for the personnel_action_doc subtype. The key says
that a value in personnel_action_doc.user_name must match a value in
dm_user.user_name.
To define the same foreign key at the property level, the data file would include the
following lines:
TYPE=personnel_action_doc
property=user_name
FOREIGN_KEY=dm_user(user_name)
REPORT and ENFORCE are optional. If you include both, REPORT must appear
before ENFORCE. Valid values for ENFORCE are:
• application
• disable
This excerpt is from a data file that defines data dictionary information for the
English locale.
#This sets the locale to English.
LOCALE = en
CODEPAGE = ISO_8859-1
property = resolve_pkg_name
LABEL_TEXT = Alias Resolution Package
This final example sets some constraints and search operators for the object types
and their properties.
TYPE = TypeC
PRIMARY_KEY = Attr2, Attr3
REPORT = The primary key constraint was not met.
ENFORCE = application
UNIQUE_KEY = Attr2, Attr3
REPORT = The unique key constraint was not met.
ENFORCE = application
FOREIGN_KEY = Attr1
REFERENCES = TypeA(Attr1)
REPORT = TypeC:Attr1 has a key relationship with TypeA:Attr1
ENFORCE = disable
IS_SEARCHABLE = True
TYPE = TypeC
property = Attr1
ALLOWED_SEARCH_OPS = =,<,>,<=,>=,<>, NOT, CONTAINS, DOES NOT CONTAIN
DEFAULT_SEARCH_OP = CONTAINS
DEFAULT_SEARCH_ARG = 3
TYPE = TypeD
LIFE_CYCLE = 3
PRIMARY_KEY = Attr1
REPORT = Attr1 is a primary key.
ENFORCE = disable
LABEL_TEXT = label t TypeD
HELP_TEXT = help TypeD
COMMENT_TEXT = com TypeD
IS_SEARCHABLE = True
UNIQUE_KEY = Attr1
REPORT = Attr1 is a unique key.
ENFORCE = application
FOREIGN_KEY = Attr1
REFERENCES = TypeC(Attr1)
REPORT = Attr1 has a foreign key relationship.
TYPE = TypeE
property = Attr1
IGNORE_IMMUTABLE = True
NOT_NULL
ENFORCE = application
ALLOWED_SEARCH_OPS = CONTAINS, DOES NOT CONTAIN
DEFAULT_SEARCH_OP = CONTAINS
DEFAULT_SEARCH_ARG = 22
READ_ONLY = True
IS_REQUIRED = True
IS_HIDDEN = True
LABEL_TEXT = property 1
HELP_TEXT = This property identifies the age of the user.
COMMENT_TEXT = You must provide a value for this property.
IS_SEARCHABLE = False
UNIQUE_KEY
FOREIGN_KEY = TypeB(Attr1)
REPORT = This has a foreign key relationship with TypeB:Attr1
ENFORCE = application
FOREIGN_KEY = TypeC(Attr2)
REPORT = This has a foreign key relationship with TypeC:Attr2
ENFORCE = application
• dd_populate.ebs
dd_populate.ebs is the script that reads the initialization file.
• data_dictionary.ini
data_dictionary.ini is the default initialization file.
• data files
The data files contain the default data dictionary information for Documentum
object types and properties. They are located in %DM_HOME%\bin
($DM_HOME/bin). There is a file for each supported locale. The files are named
with the following format:
data_dictionary_<localename>.txt
where <localename> is the name of the locale. For example, the data file for the
German locale is named data_dictionary_de.txt.
The DQL statements allow you to publish the new or changed information
immediately, as part of the operations of the statement. If you choose not to publish
immediately, the change is published the next time the Data Dictionary Publisher
job executes. You can also use the publish_dd method to publish the information.
“Publishing the data dictionary information” on page 653 contains more
information on publishing.
OpenText Documentum Server DQL Reference Guide contains the details on using these
statements.
Publishing can occur automatically, through the operations of the Data Dictionary
Publisher job, or on request, using the publishDataDictionary method or the
PUBLISH keyword.
The job uses the value of the resync_needed property of dd common info objects to
determine which data dictionary objects need to be republished. If the property is set
to TRUE, the job automatically publishes the data dictionary information for the
object type or property represented by the dd common info object.
To determine what to publish for the first time, the job queries three views:
• dm_resync_dd_type_info, which contains information for all object types that are
not published
• dm_dd_policies_for_type, which contains information for all object types that
have lifecycle overrides defined
The Data Dictionary Publisher job is installed with the Documentum tool suite.
For example, if you include PUBLISH when you create a new object type, the server
immediately publishes the data dictionary for the new object type. If you include
PUBLISH in an ALTER TYPE statement, the server immediately publishes the
information for the altered object type or property and any other pending changes
for that type or property.
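As a hypothetical sketch, such a statement might look like the following; the type and property names are invented, and the OpenText Documentum Server DQL Reference Guide should be consulted for the exact ALTER TYPE syntax:

```sql
-- Adds a property and publishes the data dictionary change immediately.
-- my_custom_type and review_status are invented names for illustration.
ALTER TYPE my_custom_type ADD review_status string(32) PUBLISH
```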
If you do not include PUBLISH, the information is published the next time one of
the following occurs:
• The Publish_dd method is run to update the entire dictionary or just that object
type or property
• The Data Dictionary Publisher job runs.
• An API method is executed to obtain data dictionary information about the
changed object type or property.
The scripts must be run as the Documentum Server installation owner. Each script
returns success (the monitored process is running) as 0 or failure (the monitored
process is stopped) as a non-zero value.
“Monitoring scripts” on page 655, lists the processes that have monitoring scripts,
the script names, and their locations and command-line syntax.
or
$DM_HOME/bin/dmqdocbroker -t <host_name> -c ping

Methods of r_object_id is displayed.
get,c,l,method_verb
./dm_agent_exec (Linux) or .\dm_agent_exec.exe (Windows)
set,c,l,method_verb
./dm_agent_exec -enable_ha_setup 1 (Linux) or .\dm_agent_exec.exe -enable_ha_setup 1 (Windows)
save,c,l
2. Kill the existing dm_agent_exec process. On Linux, use the kill command; on
Windows, use the Processes tab in Task Manager. dm_agent_exec restarts after a
minute.
• Description
• Severity level
The consistency checker script reports inconsistencies as a warning or an error.
E.2 dm_close_all
The dm_close_all function is called by the plug-in when a session is terminated. The
function is called to let the plug-in library clean up any internal data structures for
the specified session.
Argument Description
session Identifies the terminated session.
E.3 dm_close_content
The dm_close_content function is called by the plug-in to perform internal cleanup.
This function is called after the read operation for the supplied handle is completed
or if the read operation is interrupted. The function returns a boolean value.
The following table describes the argument for the dm_close_content function:
Argument Description
handle Identifies the read request.
E.4 dm_deinit_content
The dm_deinit_content function performs global internal cleanup operations. The
function is called just before the server or client unloads the plug-in library, to let the
plug-in perform any global internal cleanup operations.
E.5 dm_init_content
The dm_init_content function initializes internal data structures. This is the first
function called by Documentum Server after the plug-in library is loaded.
The following table describes the arguments for the dm_init_content function:
Argument Description
maxsession Contains the maximum number of
concurrent sessions.
mode Indicates who is invoking the plug-in. The
only valid value is 0, indicating the server.
The function returns a positive value for successful initialization. A negative return
value forces the server to unload the plug-in library. This function should return a
positive value when called multiple times within the same address space.
E.6 dm_open_content
The dm_open_content function retrieves content.
The following table describes the arguments for the dm_open_content function:
Argument Description
session Indicates the session that needs to retrieve
the content.
other_args Indicates the <other_args> supplied when
executing a Mount method. NULL for the
dm_extern_url, dm_extern_free storage
types and when the token specified in a
Setpath operation is an absolute path.
token Is the path-translated token for which to
retrieve the content.
store_object_id Indicates the external storage object ID.
callback Is a function pointer that can be called by the
plug-in library to retrieve an attribute value
for the supplied external storage object ID.
handle Identifies the read request. Filled in on
initialization and passed for subsequent read
operations.
errcode Contains error code in case of failure.
The plug-in DLL or shared library returns a positive value for successful
initialization and fills in a value for the handle. For subsequent read operations for
this token, the handle value is passed. In case of failure, the plug-in fills in an error
code in errcode.
This function is called when the server or client needs to retrieve the content for the
token.
The handle enables the plug-in to be a multi-threaded application with each thread
servicing a particular read request, with a dispatching mechanism based on the
handle value. For example, for dm_extern_file store objects, <other_args> is the root
path.
For client side plug-in configurations, if the Mount method has not been issued, the
<other_args> parameter is a pointer to the directory location represented by the
def_client_root attribute.
The callback function (which is part of the server and client) is of the form:
char *dmPluginCallback (long <session>, char *<store_object_id>, char *<attr_name>, int <position>)
The callback function returns an attribute value in string form. The value for the
position parameter should be zero when requesting an attribute value for a single-
valued attribute and should be zero or greater for multi-valued attributes.
When this callback function is called for DM_TIME datatype attribute values, the
returned string format is <mm/dd/yyyy hh:mi:ss>.
Advanced plug-ins may start performing the actual read asynchronously and start
caching the content for performance reasons.
E.7 dm_plugin_version
The dm_plugin_version function enables backward compatibility for enhancement
in future releases. This function is called once immediately after the plug-in is
loaded into the process address space. The plug-in protocol version is 1.0. Therefore,
the plug-in must set major to 1 and minor to 0.
The following table describes the arguments for the dm_plugin_version function:
Argument Description
major The major version number of the plug-in.
The value of this argument must be 1.
minor The minor version number of the plug-in.
The value of this argument must be set to 0.
E.8 dm_read_content
The dm_read_content function requires the plug-in to return the content data into
the location pointed to by the supplied buffer.
The following table describes the arguments for the dm_read_content function. The
function returns the number of bytes read and filled into buffer. The plug-in must
maintain its own internal bookkeeping to start reading from the next byte after the
previous read.
Argument Description
handle Identifies the read request.
buffer Contains the location to return the content
data; filled with zeros when there is no more
content to be read or end-of-file is reached.
buffer_size Contains the buffer size (16k).
more_data When positive, indicates more reading is
required.
errcode Contains error code in case of failure.
When a user logs into a Documentum application, that application, in turn, logs into
Documentum Server with the credentials of the user. Therefore, one login to an
application that supports usage tracking creates or modifies two lines in the table,
one line for the application login and one line for the Documentum Server login. If
you use an application that does not support usage tracking, only the Documentum
Server line is created or modified.
For example, if a user logs in to Webtop, there are two lines modified in the global
registry. One line shows the Webtop login and one line shows the Documentum
Server login, since Webtop supports usage tracking. If a user logs in to Documentum
Server using IDQL, there is only one line modified to show the Documentum Server
login, because IDQL does not support usage tracking.
However, if these accounts are used to log in to an application, the application login
is recorded. For example, if the installation owner account is used to log into
Webtop, there is a record of the Webtop login but no record of a Documentum
Server login.
When Documentum Server is set up in domain mode, the user name stored for
usage tracking includes the user domain as well as the user login name. For
example, user1 in the marketing domain is stored as marketing\user1. If there is a
login name user1 in the engineering domain, a login for this user is stored as
engineering\user1. When usage tracking counts unique users, it counts
marketing\user1 and engineering\user1 as two distinct users. Depending on
your environment, marketing\user1 and engineering\user1 could be the same or
two different individuals.
It is also possible that some of the Documentum Servers using that global registry
for usage tracking do not use domain mode, so the user name is stored without a
domain. In this case, there can be entries for user1, engineering\user1, and
marketing\user1.
You can export the usage tracking data from a global registry if you need to examine
the individual entries and verify the unique user counts.
Specific login times for a user can be obtained from the user object on the
Documentum Server that the user logged into. That particular Documentum Server
is not necessarily the global registry that is used for usage tracking.
Usage tracking reports are provided on an “AS IS” basis for informational purposes
only. Inaccuracies can arise due to a number of factors such as inconsistencies in the
setup of the global registry, access rights, customizations, availability of usage
information within the software asset, and other similar issues. This can result in the
collection and reporting of erroneous information. We shall not be liable for any
errors or omissions in the data, its collection and/or reporting by usage tracking.
Regardless of the information provided by usage tracking, customers remain fully
liable for utilizing the affected software in full compliance with the authorizing
terms and quantities specified in the contract(s), quotes and purchase orders that
governed the customer procurement of the software.