Admin Guide
ClaimCenter ™
Contents
Part 1
Application Administration
1 Managing Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
ClaimCenter Default System Users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
Default Owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
Super User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
System User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
Change the Unrestricted User. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
Minimum and Maximum Password Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
Part 2
Server Administration
Part 3
Server Clustering Administration
7 Understanding ClaimCenter Server Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Cluster Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Guidewire ClaimCenter Cluster Installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Cluster Communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Configuring Cluster Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Cluster Plugin Parameter Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Cluster Plugin System Properties Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Cache eviction messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Logging cluster plugin parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Server Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
batch Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
messaging Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
scheduler Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
startable Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
workqueue Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
ui Server Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Example ClaimCenter Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Cluster Member Startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Cluster Member Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
User Interface Cluster Member Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Non-ui Role Cluster Member Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Part 4
Security Administration
11 Managing Secure Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
ClaimCenter and the Transport Layer Security Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
ClaimCenter and Secure Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
The ClaimCenter Connection Address. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Restricting access to a ClaimCenter screen by server mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Part 5
Database Administration
15 Database Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Accessing the Database Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
The Database Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
The Database autoupgrade Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
The databasestatistics Database Configuration Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
The dbcp-connection-pool Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
The reset-tool-params Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
The jndi-connection-pool Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Configuring JNDI Connection Initialization for Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
The loader Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
The callback Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
The loader-table Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
The oracle-settings Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
The sqlserver-settings Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
The upgrade Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
The mssql-db-ddl Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
The ora-db-ddl Database Configuration Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
The versiontriggers Database Configuration Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Part 6
Business Rules Administration
21 Administering Business Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Business Rules in Guidewire ClaimCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Business Rule Roles and Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Business Rule Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Business Rule Production Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Business Rule Versioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Rules for Deleting a Business Rule Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Business Rule Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Business Rule State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Business Rule Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Business Rule Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Invalid Business Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Part 7
Administration Tools
23 Server Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Accessing the Server Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Batch Process Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Processes Table Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Chart and History Tabs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Download a Batch Process History Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Work Queue Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Work Queue Table Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Item Statistics Tabs and Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Work Queue Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
The Work Queue Report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Download a Work Queue Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
The Work Queue Raw Data Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Download the Work Queue Raw Data Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
The Work Queue History Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Download the Work Queue History Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
About Work Queue Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Set Log Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
View Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Info Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Archive Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Domain Graph Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Consistency Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Database Table Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Database Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Database Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Data Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Database Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Oracle Statspack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Oracle AWR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Oracle AWR Unused Indexes Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Document Purpose
InsuranceSuite Guide: If you are new to Guidewire InsuranceSuite applications, read the InsuranceSuite Guide for information on the architecture of Guidewire InsuranceSuite and application integrations. The intended readers are everyone who works with Guidewire applications.
Application Guide: If you are new to ClaimCenter or want to understand a feature, read the Application Guide. This guide describes features from a business perspective and provides links to other books as needed. The intended readers are everyone who works with ClaimCenter.
Database Upgrade Guide: Describes the overall ClaimCenter upgrade process, and describes how to upgrade your ClaimCenter database from a previous major version. The intended readers are system administrators and implementation engineers who must merge base application changes into existing ClaimCenter application extensions and integrations.
Configuration Upgrade Guide: Describes the overall ClaimCenter upgrade process, and describes how to upgrade your ClaimCenter configuration from a previous major version. The intended readers are system administrators and implementation engineers who must merge base application changes into existing ClaimCenter application extensions and integrations. The Configuration Upgrade Guide is published with the Upgrade Tools and is available from the Guidewire Community.
New and Changed Guide: Describes new features and changes from prior ClaimCenter versions. Intended readers are business users and system administrators who want an overview of new features and changes to features. Consult the “Release Notes Archive” part of this document for changes in prior maintenance releases.
Installation Guide: Describes how to install ClaimCenter. The intended readers are everyone who installs the application for development or for production.
System Administration Guide: Describes how to manage a ClaimCenter system. The intended readers are system administrators responsible for managing security, backups, logging, importing user data, or application monitoring.
Configuration Guide: The primary reference for configuring initial implementation, data model extensions, and user interface (PCF) files. The intended readers are all IT staff and configuration engineers.
PCF Reference Guide: Describes ClaimCenter PCF widgets and attributes. The intended readers are configuration engineers.
Data Dictionary: Describes the ClaimCenter data model, including configuration extensions. The dictionary can be generated at any time to reflect the current ClaimCenter configuration. The intended readers are configuration engineers.
Security Dictionary: Describes all security permissions, roles, and the relationships among them. The dictionary can be generated at any time to reflect the current ClaimCenter configuration. The intended readers are configuration engineers.
Globalization Guide: Describes how to configure ClaimCenter for a global environment. Covers globalization topics such as global regions, languages, date and number formats, names, currencies, addresses, and phone numbers. The intended readers are configuration engineers who localize ClaimCenter.
Rules Guide: Describes business rule methodology and the rule sets in ClaimCenter Studio. The intended readers are business analysts who define business processes, as well as programmers who write business rules in Gosu.
Contact Management Guide: Describes how to configure Guidewire InsuranceSuite applications to integrate with ContactManager and how to manage client and vendor contacts in a single system of record. The intended readers are ClaimCenter implementation engineers and ContactManager administrators.
Best Practices Guide: A reference of recommended design patterns for data model extensions, user interface, business rules, and Gosu programming. The intended readers are configuration engineers.
Integration Guide: Describes the integration architecture, concepts, and procedures for integrating ClaimCenter with external systems and extending application behavior with custom programming code. The intended readers are system architects and the integration programmers who write web services code or plugin code in Gosu or Java.
Java API Reference: Javadoc-style reference of ClaimCenter Java plugin interfaces, entity fields, and other utility classes. The intended readers are system architects and integration programmers.
Gosu Reference Guide: Describes the Gosu programming language. The intended readers are anyone who uses the Gosu language, including for rules and PCF configuration.
Gosu API Reference: Javadoc-style reference of ClaimCenter Gosu classes and properties. The reference can be generated at any time to reflect the current ClaimCenter configuration. The intended readers are configuration engineers, system architects, and integration programmers.
Glossary: Defines industry terminology and technical terms in Guidewire documentation. The intended readers are everyone who works with Guidewire applications.
narrow bold: The name of a user interface element, such as a button name, a menu item name, or a tab name. Example: Click Submit.
monospace: Code examples, computer output, class and method names, URLs, parameter names, string literals, and other objects that might appear in programming code. Example: The getName method of the IDoStuff API returns the name of the object.
monospace italic: Variable placeholder text within code examples, command examples, file paths, and URLs. Examples: Run the startServer server_name command. Navigate to http://server_name/index.html.
Support
For assistance, visit the Guidewire Community.
Guidewire Customers
https://community.guidewire.com
Guidewire Partners
https://partner.guidewire.com
Application Administration
chapter 1
Managing Users
This topic discusses the default system users that Guidewire provides in the base ClaimCenter configuration.
Default Owner
Default system user defaultowner has the following characteristics.
User defaultowner has first name Default and last name Owner. In the base configuration, ClaimCenter does not
assign roles to this user.
If ClaimCenter cannot assign a business object to a specific user, ClaimCenter assigns that object to user
defaultowner. ClaimCenter performs this assignment internally.
Guidewire recommends, as a business practice, that someone in the organization periodically search for outstanding
work assigned to user defaultowner. If the search finds any such assignments, reassign those items to a proper
owner. Guidewire also recommends that the rule administrator investigate why ClaimCenter did not assign an item
of that type and correct any errors in the rules.
Super User
Default system user su has the following characteristics.
User su has first name Super and last name User. In the base configuration, ClaimCenter assigns the Superuser role
to the su user. The Superuser role has all permissions. Thus, user su has unrestricted access to the entire
ClaimCenter application.
IMPORTANT Guidewire strongly recommends that you change the Super User password from its default value.
System User
Default system user sys has the following characteristics.
User sys has first name System and last name User. In the base configuration, ClaimCenter does not assign roles to
this user.
ClaimCenter requires user sys to exist. ClaimCenter uses this user to perform automated work such as running batch
processing, message polling, and server startup. Each time ClaimCenter needs to do such work, it creates a session
with the sys user. This is why there might appear to be many sessions for the sys user. A session in this sense is not
a web session. Rather, it represents the authentication of a user.
WARNING Do not rename or delete the sys user. Deleting or renaming this user disables ClaimCenter.
Procedure
1. In the Studio Project window, expand configuration→config:
2. Open file config.xml for editing.
3. Set configuration parameter UnrestrictedUserName to the user name of the new unrestricted user:
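For example, an entry similar to the following sketch, in which the param element syntax and the admin_user value are illustrative:
<param name="UnrestrictedUserName" value="admin_user"/>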
Application Logging
ClaimCenter creates automatic logs of many actions by users and operations by the server.
Logger: Logical file name. It is possible to configure each logger independently to log information at a certain level.
Appender: Output point (destination) for a logger. This can be, for example, the application console or a specific logging file.
Layout: Log entry formatting instructions. Each logger category can have its own layout format.
These logging component types work together to log messages according to the message type and severity level.
These components also define the format and the output destination for the various logging categories.
For more information on slf4j, see the following web site:
http://slf4j.org/index.html
For more information on Apache log4j loggers, appenders, and layouts, see the following web site:
http://logging.apache.org/log4j/1.2/manual.html
See also
• “The Logging Properties File” on page 26
• “Logging Category Reference” on page 31
• “Formatting a Log Message” on page 34
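In the base configuration, the root logging entry in logging.properties is similar to the following sketch (the appender names can vary by configuration):
log4j.rootCategory=INFO, Console, DailyFileLog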
This setting:
• Instructs ClaimCenter to send system-wide informational messages to two output points: the ClaimCenter
console (Console) and a log file (with name DailyFileLog in the default configuration).
• Sets the default logging level to INFO.
Within file logging.properties, entries such as log4j.appender.* indicate the parameters of each output point.
These entries identify properties such as location or output format options. As ClaimCenter starts, it attempts to
write a log file in the location specified by log4j.appender.DailyFileLog.File. By default, ClaimCenter writes
to the following log file:
/tmp/gwlogs/ClaimCenter/logs/cclog.log
ClaimCenter creates the log file automatically. However, if the directory specified by DailyFileLog.File does not
exist, ClaimCenter writes log information to the console only.
For more information about how to create and manage log4j entries in logging.properties, see the following
Apache web site:
http://logging.apache.org/log4j/1.2/manual.html
modules/configuration/config/logging
For example, if you have an environment called test, ClaimCenter looks for logging properties in file
test-logging.properties. If this file does not exist, then ClaimCenter reads the logging configuration from the default
logging.properties file.
See also
• “Understanding the ClaimCenter Server Environment” on page 46
• “Configure Logging in a Multiple Instance Environment” on page 36
guidewire.logDirectory = /tmp/gwlogs/ClaimCenter/logs/
Set the log file locations for the individual log4j.appender.category.File entries to the same directory as that
used for guidewire.logDirectory. This ensures that these log files are visible as well from the View Logs screen.
See also
• See “View Logs” on page 358 for more information about the View Logs screen.
To make the Messaging logging category active, remove the hash mark (#) that comments out the corresponding line of code, so that the line reads as follows:
log4j.category.Messaging=DEBUG, MessagingLog
Next steps
See also
• “Enabling Logging Categories” on page 31
Procedure
1. Determine whether Guidewire supports the logging category.
2. Copy an existing example logger in the logging properties file and paste it at the end of the file.
3. Modify the copy of an existing logger to create your new logger.
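For example, a new logger entry at the end of logging.properties might look similar to the following sketch. The category name MyIntegration, the appender name, and the log file path are illustrative:
log4j.category.MyIntegration=DEBUG, MyIntegrationLog
log4j.appender.MyIntegrationLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.MyIntegrationLog.encoding=UTF-8
log4j.appender.MyIntegrationLog.File=/tmp/gwlogs/ClaimCenter/logs/myintegration.log
log4j.appender.MyIntegrationLog.DatePattern=.yyyy-MM-dd
log4j.appender.MyIntegrationLog.layout=org.apache.log4j.PatternLayout
log4j.appender.MyIntegrationLog.layout.ConversionPattern=%-10.10X{server} %-8.24X{userID} %d{ISO8601} %p %m%n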
Next steps
See also
• See “Viewing a List of Logging Categories” on page 30 to understand how to determine whether Guidewire
supports the logging category that you want to add.
Level Description
TRACE Messages about processes that are about to start or that completed. These types of messages provide flow-of-control logging. Trace logging has no or minimal impact on system performance. Typical messages might include:
• Calling plugin.
• Returned from plugin call.
DEBUG Messages that test a provable and specific theory intended to reveal some system malfunction. These messages need not be detailed but include information that would be understandable by an administrator. For example, dumping the contents of an XML tag or short document is acceptable. However, exporting a large XML document with no line breaks is usually not appropriate. Typical messages might include:
• Length of Array XYZ = 2345.
• Now processing record with public ID ABC:123456.
INFO Messages that convey a sense of correct system operation. Typical messages might include:
• Component XYZ started.
• User X logged on to ClaimCenter.
WARN Messages that indicate a potential problem. Examples include:
• An assignment rule did not end in an assignment.
• Special setting XYZ was not found, so ClaimCenter used the default value.
• A plugin call took over 90 seconds.
ERROR Messages that indicate a definite problem. Typical messages might include:
• A remote system refused a connection to a plugin call.
• ClaimCenter cannot complete operation XYZ even with a default.
log4j.rootCategory=INFO
See also
• “Enabling Logging Categories” on page 31
• “Set Log Level” on page 357
Procedure
1. Log into ClaimCenter as a user with administrative privileges.
2. Navigate to the Server Tools Set Log Level screen.
3. Expand the Logger drop-down.
In the drop-down list, you can view the ClaimCenter standard logging categories, logging categories for
internal Guidewire code, and logging categories for third-party software.
Next steps
See also
• “Set Log Level” on page 357
Procedure
1. Ensure that the ClaimCenter server is running.
2. Navigate to the following directory in the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
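For example, a command similar to the following lists the supported logging categories. The user name and password shown are placeholders:
system_tools -user su -password <password> -loggercats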
Procedure
1. Ensure that the ClaimCenter server is running.
2. Call the following method on the SystemToolsAPI web service:
SystemToolsAPI.getLoggingCategories
# log4j.category.RuleEngine=INFO, RuleEngineLog
log4j.additivity.RuleExecution=false
log4j.appender.RuleExecutionLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RuleExecutionLog.encoding=UTF-8
log4j.appender.RuleExecutionLog.File=/tmp/gwlogs/ClaimCenter/logs/ruleexecution.log
log4j.appender.RuleEngineLog.File=/tmp/gwlogs/ClaimCenter/logs/ruleengine.log
Just as with the daily log, the directory location for any specialized log file must exist. The most important settings
to change are the location and name of the log file and the logging threshold for that logging category.
Related information
Apache Logging Services
See also
• “Understanding Logging Categories” on page 29
ClaimCenter provides the logger category API in the following class:
gw.api.system.CCLoggerCategory
For example, to use this API in Gosu code to perform assignment logging, do something similar to the following:
uses gw.api.system.CCLoggerCategory
...
var logger = CCLoggerCategory.ASSIGNMENT
...
logger.debug("Print out this message.")
There are several ways to view the base configuration logging categories:
• From the Set Log Level Server Tools screen.
• By running the system_tools command from a command prompt and adding the -loggercats option.
Logging Levels
You can use the logger category API to generate logging messages for any valid ClaimCenter logging level. The
following are all valid log levels:
• Trace
• Info
• Debug
• Warn
• Error
The following code is an example of the use of a debug logging statement in a Gosu rule.
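A minimal sketch of such a statement, assuming a rule that has a claim in scope; the category and the message text are illustrative:
uses gw.api.system.CCLoggerCategory
...
var logger = CCLoggerCategory.ASSIGNMENT
// Write a DEBUG-level message from within the rule body
logger.debug("Evaluating assignment for claim " + claim.ClaimNumber)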
The log level that you set here overrides the default logging level set for this category in logging.properties.
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%-10.10X{server} %-8.24X{userID} %d{ISO8601} %p %m%n
Notice that:
• Apache utility class PatternLayout, a standard part of the Apache log4j distribution, provides the means of
handling string patterns.
• Conversion patterns use control characters, similar to the C language printf function, to specify the output
format for the message.
You can modify the log4j.appender.log.layout.ConversionPattern value to change the information included
in log messages for a log type. For example, to list the logging category for console logs, add %c to the
log4j.appender.Console.layout.ConversionPattern value. You can then filter logs by category.
See also
• “Conversion Character Reference” on page 34
• “Format Modifier Reference” on page 36
Character Description
%% Writes the percent sign to output.
%c Name of the logging category. See “Understanding Logging Categories” on page 29 for categories provided with
ClaimCenter.
%C Name of the Java class. Because the ClaimCenter logging API is a wrapper around log4j, %C returns the class
name of the logger. If you want class names in your log messages, include them specifically in the message
rather than by using %C in the conversion pattern.
%d Date and time. Acceptable formats include:
• %d{ISO8601}
• %d{DATE}
• %d{ABSOLUTE}
• %d{HH:mm:ss,SSS}
• %d{dd MMM yyyy HH:mm:ss,SSS}
• ...
ClaimCenter uses %d{ISO8601} by default.
%F Name of the Java source file. Because the ClaimCenter logging API is a wrapper around log4j, %F returns a file
name for the ClaimCenter logging API. If you want file names in your log messages, include them specifically in
the message rather than by using %F in the conversion pattern.
%l Abbreviated format for %F%L%C%M. This outputs the Java source file name, line number, class name and method
name. Because the ClaimCenter logging API is a wrapper around log4j, the information returned is for the
ClaimCenter logging API. If you want information such as class and method names in your log messages, include
them specifically in the message rather than by using %l in the conversion pattern.
%L Line number in Java source. Because the ClaimCenter logging API is a wrapper around log4j, %L returns a line
number from the ClaimCenter logging API. If you want line numbers in your log messages, include them
specifically in the message rather than by using %L in the conversion pattern.
%m The log message.
%M Name of the Java method. Because the ClaimCenter logging API is a wrapper around log4j, %M returns a method
name from the ClaimCenter logging API itself. If you want method names in your log messages, include them
specifically in the message rather than by using %M in the conversion pattern.
%n New line character of the operating system. This is preferable to entering \n or \r\n as it works across platforms.
%p Priority of the message. Typically, either FATAL, ERROR, WARN, INFO or DEBUG. You can also create custom priorities
in your own code.
%r Number of milliseconds since the program started running.
%t Name of the current thread.
%throwable Include a throwable logged with the message. Available format is:
• %throwable – Display the whole stack trace.
• %throwable{n} – Limit display of stack trace to n lines.
• %throwable{none} – Equivalent of %throwable{0}. No stack trace.
• %throwable{short} – Equivalent of %throwable{1}. Only first line of stack trace.
%X The nested diagnostic context. You can use this to include server and user information in logging messages.
Specify a key in the following format to retrieve that information from the nested diagnostic context: %X{key}.
The following keys are available:
• server
• user
• userID
• userName
For example, to include the server name, add %X{server}.
There are three options for logging user information in logging patterns:
• user – prints the numeric opaque ID for the user
• userID – a unique user ID string, such as "aapplegate"
• userName – a real name, such as "Andy Applegate"
For any of these, you can specify the minimum and the maximum size of the field. For example: %-16.16X{userName}.
If the actual value is shorter than the minimum field size, the user identifier gets padded with spaces on the right. If
the actual value is longer than the maximum size of the field, the user identifier gets truncated from the left.
The user key lists a sequence number assigned to the user by the server and is not very informative. To include
user login ID information, instead use the userID key.
Related information
Log4j Pattern Layout
Pattern Description
%N Specifies a minimum width of N for the output. N is an integer. If the output is less than the minimum width, the
logger pads the output with spaces. Text is right-justified.
For example, to specify a minimum width of 30 characters for the logging category, add %30c to the conversion
pattern.
%-N Left-justifies the output within the minimum width of N characters. N is an integer.
For example, to have the logging category left justified within a minimum width of 30 characters, add %-30c to the
conversion pattern.
The default output is right-justified.
%.N Specifies a maximum width of N for the output. N is an integer.
For example, to have the logging category output have a maximum width of 30 characters, add %.30c to the
conversion pattern. The logger truncates output from the beginning if it exceeds the maximum width.
%M.N Pads with spaces to the left if output is shorter than M characters. If output is longer than N characters, then the
logger truncates from the beginning.
%-M.N Pads with spaces to the right if output is shorter than M characters. If output is longer than N characters, then the
logger truncates from the beginning.
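For example, the following hypothetical pattern left-justifies the logging category in a minimum 20-character field and limits the user ID field to 8 characters:
log4j.appender.Console.layout.ConversionPattern=%-20.20c %-8.8X{userID} %d{ISO8601} %p %m%n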
Procedure
1. In file config.xml, add an entry to the <registry> element for each cluster server that you want to log.
For example:
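A sketch, assuming two test servers named testserver1 and testserver2; the roles shown are illustrative:
<registry roles="ui, batch, messaging, scheduler, workqueue">
  <server env="test" serverid="testserver1" roles="ui" />
  <server env="test" serverid="testserver2" roles="batch, messaging, scheduler, workqueue" />
</registry>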
2. In file logging.properties, make the following changes:
a. Set the guidewire.logDirectory variable to the common log file directory, for example:
guidewire.logDirectory = /tmp/gwlogs/ClaimCenter/logs/
Set this variable to an absolute path to a directory that already exists. You must use forward slashes as the
path separator.
b. Set a value for log4j.appender.DailyFileLog.File that uses the guidewire.logDirectory and that
uses a -D JVM property to set the server ID.
For example, enter something similar to the following:
log4j.appender.DailyFileLog.File=${guidewire.logDirectory}/${gw.serverid.noroles}-cclog.log
The use of the serverid.noroles property suppresses the names of the roles associated with each server.
Otherwise, ClaimCenter lists the associated server roles after the server ID in the log name.
3. Start each server using the system properties that set the environment and server ID values.
For example, if using development Jetty test servers, use the following commands:
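A sketch of startup commands similar to the following, in which the server IDs match the registry entries from step 1:
gwb runServer -Denv=test -Dserverid=testserver1
gwb runServer -Denv=test -Dserverid=testserver2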
As each server starts, the server writes a log file to the common log file directory specified by the value of
guidewire.logDirectory. Each log file includes the value of serverid in the log file name.
In this example, after starting the servers, the /tmp/gwlogs/ClaimCenter/logs directory contains two log
files:
• testserver1-cclog.log
• testserver2-cclog.log
Next steps
If you edit file config.xml, you must rebuild and redeploy ClaimCenter for the changes to take effect. If you update
the logging configuration file, you must also reload this file before your changes take effect.
See also
• “Reloading the Logging Configuration” on page 38
• “Understanding the Configuration Registry Element” on page 46
• “JVM Options and Server Properties” on page 51
• Installation Guide
Procedure
1. In the Studio Project window, update file logging.properties with your logging configuration changes.
2. Ensure that the ClaimCenter application server is running.
3. Open a command prompt in the following location in the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Next steps
See also
• “System Tools Command” on page 422
Procedure
1. In the ClaimCenter Studio Project window, update file logging.properties with your logging configuration
changes.
2. Ensure that the ClaimCenter application server is running.
3. Call the following method on the SystemToolsAPI web service:
SystemToolsAPI.reloadLoggingConfig
Next steps
See also
• Integration Guide
Procedure
1. Ensure that the ClaimCenter application server is running.
2. Open a command prompt in the following location:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Result
The change in the logging level persists only while the ClaimCenter application server is running.
Next steps
See also
• “System Tools Command” on page 422
Procedure
1. Ensure that the ClaimCenter application server is running.
2. Call the following method on the SystemToolsAPI web service:
SystemToolsAPI.updatelogginglevel(logger,level)
Result
The change in the logging level persists only while the ClaimCenter application server is running.
Next steps
See also
• Integration Guide
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Set Log Level screen.
3. Select a logging category from the Logger drop-down list.
4. Enter the new logging level for that category.
Result
The change in the logging level persists only while the ClaimCenter server is running.
Next steps
See also
• “Set Log Level” on page 357
Procedure
1. In the ClaimCenter Studio Project window, open file logging.properties for editing.
2. Add code similar to the following example to the file.
log4j.category.Server.Archiving.Graph=DEBUG,Console
Procedure
1. In the ClaimCenter Studio Project window, open file logging.properties for editing.
2. Add code similar to the following example to the file.
log4j.category.Server.Archiving=INFO, ArchiveLog
log4j.appender.ArchiveLog.File=/tmp/gwlogs/ClaimCenter/logs/archivelog.log
# Archiving loggers
log4j.appender.ArchivedClaimsLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ArchivedClaimsLog.encoding=UTF-8
log4j.appender.ArchivedClaimsLog.File=/tmp/gwlogs/ClaimCenter/logs/archivedClaims.log
log4j.appender.ArchivedClaimsLog.DatePattern = .yyyy-MM-dd
log4j.appender.ArchivedClaimsLog.layout=org.apache.log4j.PatternLayout
log4j.appender.ArchivedClaimsLog.layout.ConversionPattern=%-10.10X{server} %-8.24X{userID} %d{ISO8601} %p %m%n
Server Administration
chapter 3
File Description
config.xml File config.xml contains global system parameters that you use to control the behavior of Guidewire
ClaimCenter. These configuration parameters govern large-scale system options, such as
authentication, server clustering, and the business calendar. You access file config.xml in Guidewire
Studio under configuration→config.
database-config.xml File database-config.xml stores database connection information and Data Definition Language
(DDL) options. You access file database-config.xml in Guidewire Studio under configuration→config.
See also
For more information on file config.xml and basic application configuration, see the following:
• “Configuration” on page 359
• Configuration Guide
For more information on file database-config.xml and basic database configuration options, see the following:
• “Database Configuration” on page 219
• “Database Maintenance” on page 259
• Installation Guide
File config.xml contains exactly one required <registry> element. The <registry> element can contain zero to
many <server> and <systemproperty> elements.
The attributes on the various elements have the following meanings.
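For example, a registry definition might look similar to the following sketch. The server IDs and role names are illustrative, and the attribute names on the <systemproperty> elements are assumptions for illustration:
<registry roles="batch, messaging, scheduler, workqueue, ui, custom1">
  <server env="production" serverid="prodserver1" roles="batch, messaging, workqueue" />
  <server env="production" serverid="prodserver2" roles="ui" />
  <server env="production" serverid="prodserver3" roles="scheduler, custom1" />
  <server env="test" serverid="testserver" roles="batch, messaging, scheduler, workqueue, ui, custom1" />
  <!-- The systemproperty element attribute names below are assumptions -->
  <systemproperty name="env" value="gw.cc.env" />
  <systemproperty name="serverid" value="gw.cc.serverid" />
</registry>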
Notice that:
• The set of valid server roles includes a custom role (custom1).
• The <server> elements define three separate server instances in the production environment, each of which has
specific server roles.
• There is a single server instance in the test environment that has all server roles.
• There are two system property redefinitions, one for env and one for serverid.
Property serverid.noroles
In standard usage, the value of property serverid includes also the list of server roles defined for that server. If you
want to use this property in code, without the list of server roles, use serverid.noroles instead. For example,
instead of using gw.cc.serverid, use gw.cc.serverid.noroles to suppress the list of server roles associated with
this server ID.
<registry roles="…">
<server env="production" serverid="prodserver" roles="batch, messaging, workqueue" />
</registry>
Tomcat -Dgw.cc.env=…
-Dgw.cc.serverid=…
If you are starting the Jetty development server from within ClaimCenter Studio, use the syntax for Tomcat in the
Run - Server configuration dialog, for example:
-Dgw.cc.serverid=testServer
For more information, see “Start the Application Server from ClaimCenter Studio” on page 56.
ClaimCenter determines the value of a -D option in the following manner, using -Dserverid (on Jetty) as an
example:
• If you specify a -Dserverid=prodserver JVM option at the command prompt at server startup, ClaimCenter
sets the value of serverid for that server to prodserver.
• If you do not specify a -Dserverid JVM option at server start, ClaimCenter checks the server registry for a
serverid value defined by a server entry. If found, ClaimCenter uses that value. In the example, the serverid
value is prodserver.
• If you do not specify the JVM option, and no serverid value defined by a server entry exists, ClaimCenter sets
serverid to the host name of the computer. Under some extreme security settings, this value is not available, in
which case ClaimCenter sets the serverid to localhost.
Note: Log entries display only the first 10 characters of the serverid value.
See also
• “Understanding the Configuration Registry Element” on page 46
• “JVM Options and Server Properties” on page 51
• “Cluster Members and Components” on page 386
Then, at server startup, you specify a -Denv="test" JVM option. ClaimCenter ignores any -Denv option that you
specify on the command prompt and sets the env value to standalone.
See also
• “Understanding the Configuration Registry Element” on page 46
• “JVM Options and Server Properties” on page 51
prodserver2 ui
The <registry> element in config.xml defines these servers and server roles as follows:
To add a specialized server role, say, one to use in managing activities, you need merely to add the new server role
to the list of roles:
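A sketch that adds a hypothetical activities role to the roles attribute of the <registry> element:
<registry roles="batch, messaging, scheduler, workqueue, ui, activities">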
See also
• “Defining a New Work Queue Role” on page 100
It is possible to use the -Denv and the -Dgw.passthrough JVM options with some, but not all, of the gwb build
commands. The following table indicates whether the listed JVM command options work with the core gwb
commands (tasks).
Core task Can use JVM option Cannot use JVM option
clean •
cleanIdea •
codegen •
compile •
dropDb •
genDataDictionary •
idea •
runServer •
stopServer •
studio •
Examples
For example, to pass -DmySystemProperty=someValue to build command dropDB, use the following command
option.
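A sketch of such a command, using the dropDb task name as listed in the preceding table:
gwb dropDb -DmySystemProperty=someValue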
Then, to make the dropDB command specific to a test environment, use the following command (for a Jetty server).
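A sketch, using the Jetty-style -Denv option:
gwb dropDb -DmySystemProperty=someValue -Denv=test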
See also
• “Setting JVM Options in ClaimCenter” on page 52
-Dgw.server.mode=xxxx Starts the ClaimCenter server in the specified server mode. Valid values are:
• dev
• prod
The default is dev.
-Dserverid=aaaa Sets the server ID, and possibly, one or more server roles for the ClaimCenter server:
-Dserverid=aaaa#bbbb • Server ID – Without the hash mark (#), aaaa represents a server ID only.
• Server role – With the hash mark, #bbbb assigns the bbbb server role to the server with aaaa
server ID. Use a comma-separated list, with no spaces, to list multiple server roles.
The exact JVM syntax to use depends on the server type, for example:
• Quickstart (Jetty) – Use -Dserverid=aaaa
• Tomcat – Use -Dgw.cc.serverid=aaaa
See also
• “Assigning Server Roles to ClaimCenter Cluster Servers” on page 50
• “Setting JVM Options in ClaimCenter” on page 52
QuickStart (Jetty): Set the options at server start using the following syntax:
gwb runServer -Denv=…
gwb runServer -Dserverid=…
Tomcat: Set the options using the CATALINA_OPTS environment variable. Use the following syntax:
-Dgw.cc.env=…
-Dgw.cc.serverid=…
See also
• “JVM Options for gwb Build Commands” on page 51
• “JVM Options Specific to the runServer Build Command” on page 52
Then, you can have ClaimCenter use the environment-specific parameter by specifying the environment in JVM
options at server startup. Continuing the example, to have BusinessDayStart resolve to 7:00 a.m., specify the test
environment in your JVM options:
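For example, for a Jetty development server (Tomcat uses -Dgw.cc.env instead):
gwb runServer -Denv=test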
However, if ClaimCenter resolves serverid to dev1, ClaimCenter sets BusinessDayStart to the 9:00 a.m. value.
If you define environment-specific parameters, ClaimCenter applies the setting if either the env or server resolves
to a known value. For example, suppose that you specify the BusinessDayStart parameter as follows:
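A sketch of such entries in config.xml; the param element syntax and the hour values are assumptions based on the behavior described below:
<param name="BusinessDayStart" env="test" server="prodserver" value="9"/>
<param name="BusinessDayStart" value="8"/>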
ClaimCenter sets BusinessDayStart to 9:00 a.m. if either env resolves to test or serverid resolves to
prodserver. Thus:
• If ClaimCenter resolves env to test and serverid to chicago, the BusinessDayStart is 9:00 a.m.
• Similarly, if ClaimCenter resolves env to production and serverid to prodserver, the BusinessDayStart is
also 9:00 a.m.
• If env does not resolve to test and server does not resolve to prodserver, ClaimCenter uses the default
BusinessDayStart of 8:00 a.m.
For a list of configuration parameters, including information about which parameters can be set by environment, see
the Configuration Guide.
See also
• “Understanding the Configuration Registry Element” on page 46
• “JVM Options and Server Properties” on page 51
The last line setting in this example acts as the default value for the parameter. Of course, you might want the server
to start only if a certain environment is available. In this case, a default is inappropriate.
See also
• “Understanding the Configuration Registry Element” on page 46
• “Configuration Parameters by Environment” on page 53
• “JVM Options and Server Properties” on page 51
This topic discusses the ClaimCenter server, run levels, modes, monitoring servers, and server caching.
Procedure
1. Open a command prompt and navigate to the root of ClaimCenter application directory.
2. Run the following command to compile the needed application resources and move them to the correct
location in the application server:
gwb compile
There is a dependency between the runServer command and the compile command. Guidewire recommends
that you always use the -x compile option with the runServer command to remove that dependency after
you initially run the compile command. Otherwise, ClaimCenter must first verify what resources, if any, need
to be recompiled, then perform an incremental recompile of those resources before starting the server.
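For example, after an initial gwb compile, you can start the server without re-checking compiled resources:
gwb runServer -x compile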
Next steps
See also
• “JVM Options Specific to the runServer Build Command” on page 52
• Installation Guide
Procedure
1. Navigate to the following location in ClaimCenter Studio, using the menu bar at the top of the screen:
Run→Run...
Studio opens a Run drop-down with a list of server-related options.
2. Select Edit Configurations... from the list.
Studio opens the Run - Server dialog.
3. Select Server in the left-hand navigation pane and verify the configuration options set for the server.
It is possible to modify the base configuration server settings by adding additional VM options. If you do so,
use the following format (using server ID an example):
-Dgw.cc.serverid=testServer
4. Click Run.
Studio opens a pane at the bottom of the screen to display the server log. This area also has controls (icons) for
stopping and starting the server.
Stopping Guidewire ClaimCenter
Before you stop Guidewire ClaimCenter, you must stop all work queues. Distributed workers run on daemon
threads. As the JVM (Java Virtual Machine) exits, it destroys these threads. This can cause issues if the JVM
destroys a thread while that thread is processing a work item. For example, suppose that a work queue calls a plugin
that makes a blocking call to an external system or otherwise takes a long time to return. In that case, if you do not
shut down the work queue threads correctly, it is possible to end up with inconsistent data.
b. For any process that has a Next Scheduled Run time that is before the time that you intend to stop
ClaimCenter, click Stop in the Schedule column.
All processes must have a Status of Completed before you stop the ClaimCenter server.
4. Stop Guidewire ClaimCenter:
• To stop ClaimCenter in a production environment, stop the server on which it is running.
• To stop ClaimCenter in a development environment, run the following command from the ClaimCenter
installation directory:
gwb stopServer
Server Modes
Server mode determines what functionality is available at various server run levels. All ClaimCenter server types,
except for QuickStart, can run in any of the following server modes:
• Development
• Test
• Production
ClaimCenter starts in production mode on all supported servers by default, except for the QuickStart server.
ClaimCenter on the QuickStart server always runs in development mode. You cannot run ClaimCenter on the
QuickStart server in production or test mode.
See also
• “Server Test Mode” on page 58
• “Server Run Levels” on page 59
• “Setting the Server Mode” on page 58
• “Set the QuickStart Run Level at Server Start” on page 61
• Installation Guide
-DserverMode={dev|prod|test}
To change the mode of a running server, restart the server and set the -DserverMode parameter to dev, test or prod.
ClaimCenter ignores this parameter on the QuickStart server.
2 SHUTDOWN —
3 NODAEMONS maintenance
4 DAEMONS daemons
5 MULTIUSER multiuser
The following list describes each type of server run level in more detail.
Type Description
QuickStart run level: Set at QuickStart server start using the following command, with n being a specific run level number:
gwb runServer --run-level n
See “Set the QuickStart Run Level at Server Start” on page 61.
Server run level: Shown in the server log. The server starts at level 0 and proceeds to move through each server run level in the sequence until arriving at the requested run level.
System run level: Set through command prompt system_tools options, for example:
system_tools -maintenance
See “System Tools Command” on page 422 for details.
Server run levels are independent of the server mode. The combination of mode and run level determines the
availability of functionality, such as the user interface and web services.
See also
• “Server Modes” on page 57
• “Server Modes and Server Run Levels” on page 60
• “Place the Server in Maintenance Mode” on page 63
• Installation Guide
• Integration Guide
To set the run level at QuickStart server start, use the following command:
gwb runServer --run-level n
The value of n is the numeric value of the run level as defined in “Server Run Levels” on page 59.
admin/bin
You must supply the username (user) and password (password) for a user with administrative privileges on
the ClaimCenter server. The run level is a value as defined in “Server Run Levels” on page 59.
SystemToolsAPI.setRunLevel
Next steps
If you run ClaimCenter in a clustered environment, you cannot place all the computers in a particular run level with
a single method call. Instead, you must call the method individually on each cluster member.
The returned message indicates the server run level. The possible responses are:
• MULTIUSER
• DAEMONS
• MAINTENANCE
• STARTING
See also
• “Server Run Levels” on page 59
admin/bin
You must supply the username (user) and password (password) for a user with administrative privileges on
the ClaimCenter server.
SystemToolsAPI.getRunLevel
Next steps
See also
• Integration Guide
ClaimCenter still allows connections made through APIs or command prompt tools for any daemons with a
minimum run level equal to or lower than NODAEMONS. Restricting the run level permits integration processes to
proceed without interference from non-administrator users.
See also
• “Server Run Levels” on page 59
• “Set the QuickStart Run Level at Server Start” on page 61
• “Set the Server Run Level Through System Tools” on page 61
• “Set the Server Run Level Through Web Services” on page 61
admin/bin
You must supply the username (user) and password (password) for a user with administrative privileges on
the ClaimCenter server.
See also
• “Monitoring Server Status with WebSphere” on page 63
• “Monitoring Cluster Health” on page 153
• Documentation specific to the application server
• Installation Guide
Column header Meaning
Safe Ordering Object Name: ClaimCenter groups messages for each messaging destination based on their associated primary object. (ClaimCenter processes messages associated with objects other than the primary object as non-safe-ordered messages.)
Send Time: Time that ClaimCenter sent the message.
Failed: A message can fail for several reasons, for example:
• The message destination did not process the message successfully due to a processing error.
• The message destination returns a negative acknowledgement (nack) indicating that the message delivery failed.
• The message was part of a series of messages, one or more of which failed.
Retryable Error: Waiting to attempt a retry. ClaimCenter attempted to send the message but the destination threw an exception. If the exception was retryable, ClaimCenter automatically attempts to retry the send before turning the message into a failure. ClaimCenter attempts to send an event message several times. Typically, you can configure the number of retries and the interval between them for an integration. Review documentation for the specific destination to find out how to configure it.
In Flight: ClaimCenter is waiting for an acknowledgement.
Unsent: The message has not been sent, for example:
• The message is waiting on a prior message.
• The destination is not processing messages as the destination is in a state of suspension.
• The destination is falling behind in processing messages.
Error Message: Error message returned if a message fails.
It is possible to filter the messages that show in the table by selecting a filtering characteristic from the filtering
drop-down list.
Message Handling
A ClaimCenter server reads integration messages from a queue and dispatches them to their destinations. However,
there is no guarantee that messages in the queue are ready for dispatching in the same order in which ClaimCenter
places the messages in the queue.
For example, suppose that a messaging server starts writing message 1 to the queue, and then starts writing message
2 to the same queue. It is possible that the server completes and commits message 2 while still writing message 1.
This does not, in itself, present an issue. However, if the server attempts to read messages off the queue at this
moment, then it skips the uncommitted message 1 and reads message 2. You are most likely to encounter this
situation in a clustered ClaimCenter environment.
To address this situation, ClaimCenter provides the IncrementalReaderSafetyMarginMillis parameter in file
config.xml. This parameter determines how long after detecting a skipped message that ClaimCenter attempts to
read messages again. This waiting period gives ClaimCenter a chance to commit the skipped message. If it is not
possible to commit the message before the expiration of the waiting period:
• ClaimCenter assumes the message is lost and that it is not possible to commit the message.
• ClaimCenter skips the message permanently, thereafter.
For example, in the previous scenario, ClaimCenter waits 10 seconds (the default parameter value) before
attempting to read messages again, beginning with the skipped message 1. If message 1 has still not been committed
at that time, ClaimCenter skips it permanently.
Set the IncrementalReaderSafetyMarginMillis parameter long enough for the server to commit the messages, so
that ClaimCenter does not prematurely mark messages as permanently skipped. However, because the server does
not read any other messages during this waiting period, do not set IncrementalReaderSafetyMarginMillis so long
that it delays the delivery of messages.
You can also set the following configuration parameters in config.xml to configure the message reading
environment:
• IncrementalReaderPollIntervalMillis
• IncrementalReaderChunkSize
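For illustration only, these parameters might appear together in config.xml as in the following sketch, which assumes the <param> element form that config.xml uses for configuration parameters. The safety margin shown matches the 10-second default described above; the other two values are arbitrary examples, not recommendations.
<!-- Illustrative message reading settings -->
<param name="IncrementalReaderSafetyMarginMillis" value="10000"/> <!-- wait 10 seconds before permanently skipping an uncommitted message -->
<param name="IncrementalReaderPollIntervalMillis" value="5000"/> <!-- example value only; see the Configuration Guide for the parameter definition -->
<param name="IncrementalReaderChunkSize" value="100"/> <!-- example value only; see the Configuration Guide for the parameter definition -->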
Procedure
1. In the ClaimCenter Studio Project window, navigate to configuration→config→Messaging.
2. Double-click file messaging-config.xml to open the file in the Studio Messaging editor.
Next steps
See also
• Configuration Guide
Procedure
1. Ensure that the Guidewire ClaimCenter server is running.
2. Open a command prompt in the following location in the ClaimCenter installation directory:
admin/bin
For example, the following command purges all completed messages received prior to 02/06/06.
You must supply the username (user) and password (password) for a user with administrative privileges.
Session Timeout
ClaimCenter creates a session for each browser connection. ClaimCenter uses the server’s session management
capability to manage the session. Each individual session receives a security token that the ClaimCenter server
preserves across multiple requests. The server validates each token against an internal store of valid tokens.
You configure the timeout value for a session by setting the SessionTimeoutSecs parameter in config.xml. This
value sets the session expiration timeout globally for all ClaimCenter browser sessions.
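For illustration only, a one-hour timeout might look like the following sketch in config.xml; the value is an example, not a recommendation.
<param name="SessionTimeoutSecs" value="3600"/> <!-- example: expire idle browser sessions after one hour -->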
Typically, the server determines the session timeout value according to the following hierarchy.
Level Description
Server The session timeout to use for all applications on the server if the timeout value is not set at a higher level.
Enterprise application The session timeout specified at the enterprise application level. You can specify this value at the EAR file level. You can set the enterprise application session timeout value to override the server session timeout value.
Web application The session timeout specified at the web application level. You can specify this value at the WAR file level. You can set the web application session timeout value to override the enterprise application and server session timeout values.
Application level The session timeout specified in the application web.xml file. ClaimCenter does not specify a session timeout in web.xml.
Application code An application can override any other session timeout value. ClaimCenter uses the session timeout value specified by the SessionTimeoutSecs parameter in config.xml.
User An administrator can set the session expiration timeout value on an individual user basis, using the Session timeout field on the ClaimCenter User screen.
See also
• Configuration Guide
See also
• “Planning a ClaimCenter Cluster” on page 149
• Installation Guide
Cache Management
Objects do not remain forever present or valid in the ClaimCenter database cache. Guidewire provides several
caching mechanisms to verify that cache entries are still relevant. They are:
• Stale timeout
• Eviction timeout
• Cluster member object tracking
Stale Timeout
A stale timeout mechanism ensures that the server instance does not use excessively old object entries. An object is
stale if it has not been refreshed from the database within a configurable amount of time. Upon accessing a cache
entry, the server instance calculates the duration since the object was last read from the database. If that duration
exceeds the stale time, the server instance refreshes the cache entry from the database.
To avoid increased complexity, ClaimCenter prefers this mechanism over evicting objects upon stale timeout. You
can set a default stale time by adjusting the GlobalCacheStaleTimeMinutes parameter in config.xml.
Eviction Timeout
A ClaimCenter evict timeout mechanism removes old objects from the cache. For example, an object has an evict
time of 15 minutes and a stale time of 30 minutes. If the server uses the object once every 14 minutes,
ClaimCenter never evicts the cache entry, but the entry does eventually become stale.
You can set the default evict time by adjusting the GlobalCacheReapingTimeMinutes parameter in config.xml. In
the base configuration Guidewire sets the value of GlobalCacheReapingTimeMinutes to 15 minutes. The minimum
value for this parameter is 1 minute. The effective maximum value for the GlobalCacheReapingTimeMinutes
parameter is the lesser of its set value and the value of the GlobalCacheStaleTimeMinutes parameter.
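For illustration only, the two timeouts might be set together in config.xml as in the following sketch. The reaping time shown matches the base configuration value; the stale time is an example value.
<param name="GlobalCacheReapingTimeMinutes" value="15"/> <!-- base configuration value: entries unused for 15 minutes become eligible for eviction -->
<param name="GlobalCacheStaleTimeMinutes" value="30"/> <!-- example: refresh entries not read from the database within 30 minutes -->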
See also
• “Server Cache Tuning Parameters” on page 71
Cache Thrashing
Cache thrashing is a phenomenon whereby evictions remove cache entries prematurely and force additional database
reloads that are detrimental to performance. There are several cases that can lead to cache thrashing:
• A single data set can be too large to reside in the global cache. This forces the server to load the same data from
the database and subsequently evict the data, potentially thousands of times, while loading a single web page.
This results in serious performance issues.
• Some concurrent actions result in thrashing. For example, a user logs onto a server that has the batch server role.
A batch job, which can load many objects into the cache, can remove objects from the cache. This forces the
server to reload the cache as the user again needs those objects. For this reason, Guidewire recommends that you
have separate servers for handling batch processing and user interface transactions.
If an individual cache reports hundreds or thousands of evictions and a low cache hit rate, then that cache is
experiencing cache thrashing. If you notice cache thrashing on a server that is not processing batch jobs, re-size the
cache. Otherwise, dedicate the server to batch jobs.
See also
• “Detect Cache Thrashing” on page 70.
Procedure
1. Log into Guidewire ClaimCenter as a user with administrative privileges.
2. Press ALT+SHIFT+T to open the Server Tools screens.
3. Navigate to the Cache Info screen.
4. Use the information in the Cache Info screen to analyze the number of evictions.
5. Click Clear Global Cache to clear the cache.
6. Reproduce the operation that you suspect caused the cache thrashing.
7. Reanalyze the information on the Cache Info screen.
Next steps
After you have taken the proper action, repeat the analysis to verify that the change yielded the results you expected.
See also
• “Cache Thrashing” on page 70.
See also
• “Server Cache Tuning Parameters” on page 71.
• “Server Memory Management” on page 73
Parameter Description
ExchangeRatesRefreshIntervalSecs The number of seconds between refreshes of the exchange rates cache.
ClaimCenter uses this specialized cache for exchange rates only.
GlobalCacheActiveTimeMinutes Time, in minutes, that ClaimCenter considers cached objects active. You
can think of this as a period in which ClaimCenter is heavily using an item,
for example, how long a user stays on a screen. The cache mechanism
gives higher priority to preserving these higher-use objects.
Set GlobalCacheActiveTimeMinutes to a value less than GlobalCacheReapingTimeMinutes.
GlobalCacheDetailedStats Boolean value that specifies whether to collect detailed statistics for the
global cache. Detailed statistics are data that ClaimCenter collects to
explain why the caching mechanism evicted items from the cache.
ClaimCenter collects basic statistics, such as the miss ratio, regardless of
the value of GlobalCacheDetailedStats. Disabling collection of detailed
cache statistics can sometimes improve performance.
Guidewire sets the value of GlobalCacheDetailedStats to false by
default. Set the parameter to true to help tune your cache.
If the GlobalCacheDetailedStats parameter is set to false, the Cache Info
screen does not include the Evict Information and Type of Cache Misses graphs.
At runtime, use the Management Beans screen to enable the collection of
detailed statistics for the global cache.
GlobalCacheReapingTimeMinutes Time, in minutes, since the last use of a cached object before ClaimCenter
considers the object eligible for reaping. This can be thought of as the
period during which ClaimCenter is most likely to reuse an object.
An evict timeout mechanism removes old objects from the cache. Once per
minute, a thread evicts cache entries that have not been used for a period
equal to or greater than GlobalCacheReapingTimeMinutes. This
mechanism differs from the stale timeout mechanism. The stale timeout
mechanism refreshes from the database those cache entries that have
exceeded the stale time. This process occurs as the server accesses a
cached object. The evict timeout mechanism deletes any cache entries that
are older than the default evict time. An object can become stale but not
evicted if it is continually in use. For example, a bean has an evict time of
15 minutes and a stale time of 30 minutes. If the server uses the object
once every 14 minutes, ClaimCenter never evicts the cache entry, but the
entry does eventually become stale.
GlobalCacheReapingTimeMinutes is initially set to 15 minutes. The
minimum value for this parameter is 1 minute. Since the eviction thread
only runs once per minute, a smaller value would not make sense. The
maximum value for this parameter is 15 minutes.
GroupCacheRefreshIntervalSecs The number of seconds between refreshes of the groups cache.
ClaimCenter uses this specialized cache for group-related data only.
GlobalCacheSizeMegabytes Maximum amount of heap space used to store cached entities, expressed
as a number of megabytes. This parameter supersedes the value of
GlobalCacheSizePercent.
At runtime, you can use the Cache Info or Management Beans screen to modify
this value.
GlobalCacheSizePercent Maximum amount of heap space used to store cached entities, expressed
as a percentage of the maximum heap size.
GlobalCacheStaleTimeMinutes Time, in minutes, after which ClaimCenter considers an object in the cache
stale if it has not been refreshed from the database.
A stale timeout mechanism ensures that the server does not use
excessively old object entries. An object is stale if it has not been refreshed
from the database within a configurable amount of time. Upon accessing a
cache entry, the server calculates the duration since the object was last
read from the database. If that duration exceeds the stale time, the server
refreshes the cache entry from the database. To avoid increased
complexity, ClaimCenter prefers this mechanism over evicting objects upon
stale timeout.
At runtime, you can use the Cache Info or Management Beans screen to modify
this value.
GlobalCacheStatsWindowMinutes This parameter denotes a period of time, in minutes, that ClaimCenter uses
for two purposes:
• The period of time to preserve the reason that ClaimCenter evicted an
object, after the event occurred. If a cache miss occurs, ClaimCenter
reports the reason on the Cache Info screen.
• The period of time to display statistics on the chart on the Cache Info
screen.
ScriptParametersRefreshIntervalSecs The number of seconds between refreshes of the script parameters cache.
ClaimCenter uses this specialized cache for script parameters only.
ZoneCacheRefreshIntervalSecs The number of seconds between refreshes of the zones cache. ClaimCenter
uses this specialized cache for zones only.
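For illustration only, a cache tuning pass might combine several of these parameters in config.xml as in the following sketch; all of the values are example values, not recommendations.
<param name="GlobalCacheSizeMegabytes" value="512"/> <!-- example cap on cache heap usage; supersedes GlobalCacheSizePercent -->
<param name="GlobalCacheDetailedStats" value="true"/> <!-- collect detailed eviction statistics while tuning the cache -->
<param name="GlobalCacheStatsWindowMinutes" value="60"/> <!-- example window for eviction reasons and Cache Info charts -->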
See also
• “Special Caches for Rarely Changing Objects” on page 73
• “Cache Info” on page 398
• “Management Beans” on page 386
• Configuration Guide
See also
• For information on how to view cache performance, see “Cache Info” on page 398.
Group GroupCacheRefreshIntervalSecs
ScriptParameter ScriptParametersRefreshIntervalSecs
Zone ZoneCacheRefreshIntervalSecs
See also
• “Server Cache Tuning Parameters” on page 71.
• Configuration Guide
serverName 2016-04-09 16:44:14,423 INFO Memory usage: 80.250 MB used, 173.811 MB free, 254.062 MB total,
2048.000 MB max
The following list describes the different types of information that you see in a memory logging message:
• used: The amount of memory that the server is currently using.
• free: The amount of currently allocated memory that is not in use.
• total: The total amount of memory currently allocated to the JVM. The used and free values add up to the total value.
• max: The maximum amount of memory that the JVM can allocate.
It is common for a server to use up the maximum amount of memory fairly quickly, so that used and total are at or
near the max value. This indicates normal operation. If the server needs more memory, it triggers garbage collection
to free up the memory used by stale objects.
IBM provides the IBM Support Assistant. You can install multiple plugins within this tool. Several plugins are
available for the IBM JVM and WebSphere. These tools provide deep analysis of JVM behavior, help spot issues,
and recommend how to tune the JVM.
http://www.ibm.com/software/support/isa/
Related information
IBM Developer Kit and Runtime Environment - Diagnostics Guide
Related information
Troubleshooting Guide for Java SE 6 with HotSpot VM
• Heap dumps can be very large files. It might be the case that you must provide some configuration to allow the
account running the ClaimCenter instance to create such large files.
• The generation of a heap dump during out-of-memory conditions is sometimes challenging. As a JVM is
reaching maximum memory utilization, it generally experiences severely degraded performance. If the pace of
the leak is gradual, the out-of-memory condition might take an inordinate amount of time to occur. This length
of time might be incompatible with the need to restore performance for users or processes.
• Windows only: Windows does not support signals. Therefore, generating a heap dump by starting the JVM with a
heap dump on CTRL-BREAK depends on the capacity to send a CTRL-BREAK. You cannot send a CTRL-BREAK to a JVM
started as a background process. Therefore, for the time of the investigation, start the JVM from a command
prompt rather than as a background process.
• The JVM generally provides optional flags that prevent it from listening to signals. Disable these flags while
trying to generate a heap dump through signals.
• Heap dump analysis is very memory intensive. Assume that the tool used to analyze the heap dump might need a
heap two to three times larger than the amount of objects captured in the heap dump. Host the heap dump
analyzer on a server with a 64-bit JVM and a significant amount of memory. If such a configuration is not
available, you might want to reduce the heap size so that the memory leak reaches an identifiable threshold
sooner. This method allows the generation of smaller, easier to analyze heap dumps.
• Heap dump analysis tools generally point to the CacheImpl class as the largest memory consumer. This class
corresponds to the Guidewire cache. It is normal that the cache consumes a few hundred megabytes. In this case,
the memory issue is likely not caused by cache growth. If the cache consumes significantly more memory, you
might need to downsize the cache.
Related information
IBM Support Assistant
IBM DTJF adapter for Eclipse Memory Analyzer
Java VisualVM
Related information
Using JConsole to Monitor Applications
Java Heap Analysis Tool
Java VisualVM
JVM Profiling
Java profilers are available for two main purposes:
Memory profiling Identifies memory usage and, more specifically, memory leaks due to referenced but unused objects.
CPU profiling Helps identify programmatic hot spots and bottlenecks. This analysis might help remove the corresponding bottlenecks, thereby increasing performance.
Guidewire has used two profiling tools internally and found each to be of good quality. Both tools provide both
memory and CPU profiling:
• Guidewire recommends YourKit for memory profiling.
• Guidewire recommends JProfiler for CPU profiling.
To profile ClaimCenter, load the profiler agent into the ClaimCenter JVM either as it is starting ClaimCenter or by
attaching the profiler agent to a running JVM. Refer to the profiler documentation for instructions.
Geocoding is the process of assigning a latitude and longitude to an address. Guidewire supports geocoding in
ClaimCenter, PolicyCenter, and ContactManager. If enabled, ClaimCenter uses geocoding to produce catastrophe
searches and heat maps.
Understanding Geocoding
The geocoding process assigns latitudes and longitudes to addresses. Software then uses geocoded addresses to
present users with geographic information, such as the distance between two addresses. All primary addresses in
ClaimCenter, PolicyCenter, and ContactManager are candidates for geocoding.
Related references
“Geocode Writer Batch Processing” on page 118
Related information
Bing Maps Dev Center
Configuring Geocoding
Configuring geocoding involves the following tasks:
• Enabling your implementation of the GeocodePlugin plugin
• Setting geocoding feature parameters
• Scheduling the Geocode work queue
• Configuring the number of Geocode batch processing workers
IMPORTANT Schedule geocode batch processing for ClaimCenter and ContactManager with enough time between
runs for each run to fully process the work items in the work queues. If you find duplicate work items in the work
queues for the same address ID, extend the interval between runs.
Guidewire recommends that you configure the geocoding process with a sufficient number of worker instances
before you start your production servers.
The default configuration specifies one worker instance. Worker instances pass addresses from the work queue to
the GeocodePlugin plugin implementation. Consider increasing the number of worker instances to improve
throughput. To further improve throughput, assign worker instances to run on multiple servers.
See also
• “Understanding Geocoding” on page 79
Parameter Description
UseGeocodingInPrimaryApp Set to true to enable geocoding in the ClaimCenter user interface, such
as in assignment or user search screens.
The default is false.
UseGeocodingInAddressBook Set to true if you have ClaimCenter integrated with ContactManager
and ContactManager has geocoding enabled for vendors. This setting
enables vendor search in the ClaimCenter and ContactManager user
interfaces.
The default is false.
UseMetricDistancesByDefault If true, ClaimCenter uses kilometers and metric distances instead of
miles and United States distances for driving distance or directions.
Set this parameter identically in Guidewire applications that use
geocoding.
The default is false.
ProximitySearchOrdinalMaxDistance A distance that provides an approximate bound to improve
performance of an ordinal (nearest n) proximity search. This distance is
in miles, unless you set UseMetricDistancesByDefault to true. The
search can return results that are farther away than the distance
specified.
Set this parameter identically in Guidewire applications that use
geocoding.
This parameter has no effect on radius (within n miles or kilometers)
proximity searches or walking-the-group-tree-based proximity
assignment.
The default is 300.
ProximityRadiusSearchDefaultMaxResultCount The maximum number of results to return if performing a radius
(within n miles or kilometers) proximity search. This parameter has no
effect on ordinal (nearest n items) proximity searches. This parameter
does not have to match the value of the corresponding parameter in
other Guidewire applications.
The default is 1000.
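For illustration only, a deployment that uses geocoding with ContactManager and metric distances might set these parameters in config.xml as in the following sketch; the values are examples only.
<param name="UseGeocodingInPrimaryApp" value="true"/> <!-- enable geocoding in the ClaimCenter user interface -->
<param name="UseGeocodingInAddressBook" value="true"/> <!-- enable geocoded vendor search with ContactManager -->
<param name="UseMetricDistancesByDefault" value="true"/> <!-- use kilometers instead of miles -->
<param name="ProximitySearchOrdinalMaxDistance" value="300"/> <!-- default value shown; bounds nearest-n proximity searches -->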
4. Ensure that the Class field specifies the Bing Maps implementation class:
gw.plugin.geocode.impl.BingMapsPluginRest
5. Under Parameters, specify the following:
applicationKey The application key that you obtained from Bing Maps.
geocodeDirectionsCulture The locale for geocoded addresses and routing instructions returned from Bing Maps. For
example, use the locale code ja-JP for addresses and instructions for Japan. The plugin uses
en-US if you do not specify a value. For a current list of codes that Bing Maps supports, refer to
the following web site:
http://msdn.microsoft.com/en-us/library/cc981048.aspx
imageryCulture The language for map imagery. For example, use the language code ja for maps labeled in
Japanese. The plugin uses en if you do not specify a value. For a current list of codes that Bing
Maps supports, refer to the following web site:
http://msdn.microsoft.com/en-us/library/cc981048.aspx
mapUrlHeight Height of maps, in pixels. The plugin uses 500 if you do not specify a value.
mapUrlWidth Width of maps, in pixels. The plugin uses 500 if you do not specify a value.
Next steps
See also
• Integration Guide
Procedure
1. In Guidewire Studio, navigate to configuration→config→scheduler and open scheduler-config.xml for
editing.
2. Uncomment the following section in the scheduler-config.xml file.
<ProcessSchedule process="Geocode">
<CronSchedule hours="1" minutes="30"/>
</ProcessSchedule>
Next steps
See also
• “The Work Queue Scheduler” on page 93
• “Understanding a Work Queue Schedule Specification” on page 93
<work-queue workQueueClass="com.guidewire.pl.domain.geodata.geocode.GeocodeWorkQueue"
progressinterval="600000">
<worker instances="1"/>
</work-queue>
Next steps
See also
• “Worker Configuration” on page 98
• “Database Statistics Batch Processing” on page 114
Procedure
1. Decide which geocoding service you want to use, then configure a geocoding plugin implementation to work
with that service.
If you use the Microsoft Bing Maps service, you can use the default geocoding plugin interface supplied with
ClaimCenter.
2. Enable your implementation of the geocode plugin.
3. In the ClaimCenter Studio Project window, expand configuration→config:
a. Double-click config.xml to open it.
b. Locate the following parameters and set them as shown:
Set Credential to the value that you obtained during the licensing process for the Bing Maps Ajax
Control.
4. Save your changes.
5. Restart the application server.
<ProcessSchedule process="CatastrophePolicyLocationDownload">
<CronSchedule hours="2"/>
</ProcessSchedule>
2. Depending on whether you are working in a development or production environment, do one of the following:
Next steps
See also
• “The Work Queue Scheduler” on page 93
• “Catastrophe Policy Location Download Batch Processing” on page 108
• Installation Guide
ClaimCenter provides tools for configuring and managing various forms of batch processing. You can schedule
batch processing to run regularly or on demand.
See also
• “Batch Process Info” on page 349
• “Work Queue Info” on page 352
• Integration Guide
Work queue
A work queue operates on a batch of items in parallel. ClaimCenter distributes work queues across all servers in a
ClaimCenter cluster that have the appropriate role. In the base configuration, Guidewire assigns this functionality to
the workqueue server role.
A work queue comprises the following components:
• A processing thread, known as a writer, that selects a group (batch) of business records to process. For each
business record (a claim record, for example), the writer creates an associated work item.
• A queue of selected work items.
• One or more tasks, known as workers, that process the individual work items to completion. Each worker is a
short-lived task that exists in a thread pool. Each work queue on a cluster member shares the same thread pool.
By default, each work queue starts a single worker on each server with the appropriate role, unless configured
otherwise.
Work queues are suitable for high volume batch processing that requires the parallel processing of items to achieve
an acceptable throughput rate.
Batch process
A batch process operates on a batch of items sequentially. Batch processes are suitable for low volume batch
processing that achieves an acceptable throughput rate even though the batch process handles items in sequence.
For example, writers for work queues operate as batch processes because they can select items for a batch and
write them to their work queues relatively quickly.
See also
• “Server Roles” on page 141
• “Work Queues” on page 86
• “Batch Processes” on page 89
• Integration Guide
Work Queues
A work queue comprises the following components.
Writer A writer thread selects units of work for batch processing and writes their identities to a work queue.
Work queue A work queue is a database table that the writer loads with a batch of work items and from which workers check out work items for processing.
Worker One or more worker tasks that check out work items from the work queue and process them to completion. By default, each work queue starts a single worker on each cluster member with the workqueue role, unless configured otherwise.
Starting the writer initiates a run of batch processing on a work queue. The batch is complete when the workers
exhaust the queue of all work items, except those that they failed to process successfully.
[Figure: Work queue processing across cluster members. Workers on servers A, B, and C check out work items from the work queue table and continue processing until the table is empty.]
After it checks out a quota of work items, the worker task processes them sequentially. Whenever a worker
completes a work item successfully, it deletes the item from the table and begins to process the next item. The
standard work item table (StandardWorkQueue) is retireable, so successfully completed work items remain in the
table for historical reference.
Note: In rare cases, it is possible for ClaimCenter to notify a worker of work, but the worker finds no work
available after it awakens. For example, for small batch runs, a worker can check out all items in the batch with its
first check-out quota of items. This action can occur between the time the writer notifies the workers and other
workers awaken. If a worker awakens and finds no work items, the worker goes back to sleep.
You configure work queues in file work-queue.xml, which you can access in Guidewire Studio at the following location:
configuration→config→workqueue
You can view and manage work queues from the Server Tools Work Queue Info screen in ClaimCenter, if you have
the appropriate administrative privilege.
See also
• “The Work Queue Scheduler” on page 93
• “Performing Custom Actions After Batch Processing Completion” on page 100
• “Work Queue Info” on page 352
Batch Processes
ClaimCenter distributes batch processes across all ClaimCenter cluster members that have the batch server role.
Each server with the batch role also has a batch process lease manager that acquires and manages the batch process
leases on that server. In this context, a lease represents a single run of a single batch process.
Available servers with the batch role compete for available batch processing leases. After a server acquires a lease,
that server runs the batch process to completion.
How aggressively the cluster servers compete with each other depends on how many batch processes each one is
individually already running. Those servers running fewer or no batch processes are more likely to acquire a new
batch process lease than other servers already busy running processes. It is possible to configure this behavior.
For scheduled batch processes, a scheduler component, running on a cluster member with the scheduler role,
decides to start a batch process according to the published schedule. The scheduler first creates a new lease for the
batch process in the database. All cluster members with the batch server role then compete to acquire that lease.
The cluster member that wins the competition starts the batch process.
The number of worker tasks is important if it is zero (0). Setting the number of worker tasks to 0 disables the work
queue, as there are no workers to perform the work. To enable the work queue, set the number of workers to a value
greater than zero.
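For example, a work queue disabled in this way might look like the following sketch in work-queue.xml; the class shown is only an example, and other attributes are omitted.
<work-queue workQueueClass="com.guidewire.pl.domain.geodata.geocode.GeocodeWorkQueue" … >
  <worker instances="0"/> <!-- no workers: the work queue is disabled until you raise this value -->
</work-queue>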
How to Run a Batch Process from the Batch Process Info Screen
It is possible to run batch processes from the Server Tools Batch Process Info screen in ClaimCenter. ClaimCenter
enables the Run button on this screen for all batch process types that belong to the BatchProcessTypeUsage
category UIRunnable.
To access the Batch Process Info screen, you must have the internaltools permission. The Admin role has this
permission by default. Alternatively, if the EnableInternalDebugTools parameter is set to true in config.xml and
the server is running in development mode, all users can access the Batch Process Info screen.
Procedure
1. Log in to ClaimCenter.
2. Press ALT+SHIFT+T to display the Server Tools tab.
3. Navigate to Work Queue Info.
4. Click Run Writer in the Actions column of the work queue that you want to run.
Procedure
1. Log in to ClaimCenter.
2. Press ALT+SHIFT+T to display the Server Tools tab.
3. Navigate to Batch Process Info.
4. Click Run in the Action column of the batch process that you want to run.
Procedure
1. Start the ClaimCenter server if it is not already running.
2. Open a command prompt.
3. Navigate to the following location in the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
For the process value, specify a valid process code. For the process code for each batch processing type,
including writers for work queues, consult the reference topic for the individual batch processing type.
Next steps
See also
• “Work Queues and Batch Processes, a Reference” on page 103
Procedure
1. Start the ClaimCenter server if it is not already running.
2. Open a command prompt.
3. Navigate to the following location in the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
For the process value, specify a valid process code. For the process code for each batch processing type,
including writers for work queues, consult the reference topic for the individual batch processing type.
Next steps
See also
• “Work Queues and Batch Processes, a Reference” on page 103
Procedure
1. Start the ClaimCenter server if it is not already running.
2. Open a command prompt.
3. Navigate to the following location in the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
For the process value, specify a valid process code. For the process code for each batch processing type,
including writers for work queues, consult the reference topic for the individual batch processing type.
Result
For work queues, executing this command returns the status of the writer process. The command does not check
whether any work items remain in the work queue. Thus, the process status can report as complete as soon as the
writer finishes adding items to the work queue, even though work items that still need processing might remain in
the queue.
Next steps
See also
• “Work Queues and Batch Processes, a Reference” on page 103
The process attribute sets the process to run. The env attribute is an optional attribute that specifies the environment
in which the schedule definition for the process applies. The schedule_attributes value is a valid schedule
specification.
If needed, you can list multiple ProcessSchedule entries for the same process. The process then runs according to
each specified schedule. If you schedule a process to run while the same process is already running, then
ClaimCenter skips the overlapping process. If the scheduler-config.xml file does not list a process, then the
process does not run.
Generally, schedule batch process runs hours apart rather than minutes apart, because some batch processes require
a lot of server resources. Schedule such processes to wake infrequently or at times when the server is less busy,
such as late at night or very early in the morning.
You may want to schedule some ClaimCenter batch processes to run periodically throughout the business day. For
example, the default configuration of ClaimCenter schedules the ActivityEsc batch process to run every 30
minutes. Avoid running such batch processes during your nightly batch processing window. Instead,
wait until the end of the batch window to run them. For example, schedule the ActivityEsc batch process to run
every 30 minutes except during your nightly batch window. Alternatively, run such batch processes at prescribed
places in your chain of nightly batch processes.
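For illustration only, the following scheduler-config.xml sketch runs the ActivityEsc batch process every 30 minutes during daytime hours, leaving the nightly batch window free; the env value and the specific times are example values.
<ProcessSchedule process="ActivityEsc" env="production">
  <CronSchedule minutes="0/30" hours="6-22"/>
</ProcessSchedule>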
The ClaimCenter scheduler uses the ClaimCenter server time for reference.
See also
• “Understanding a Work Queue Schedule Specification” on page 93
<CronSchedule schedule_attributes/>
Use this element to define a schedule_attributes value to specify the exact timing, such as once every hour or
every night at a certain time. The schedule_attributes value is a combination of one or more of the following
attributes:
IMPORTANT If you do not provide a value for a defined schedule attribute, the scheduler uses its default value in
determining the work queue schedule. For example, if you do not specify a value for the hours attribute, the
scheduler assumes a value of * and ClaimCenter runs the work queue process every hour. Thus, Guidewire
recommends that you provide a value for each scheduler attribute. If you do not provide a value for a specific
attribute, carefully review that attribute's default value and determine if the default value meets your business
needs.
Character Meaning
* Indicates all values. For example, minutes="*" means run the process every minute.
? Indicates no specific value. Used only for dayofmonth and dayofweek attributes. See the examples for clarification.
- Specifies ranges. For example, hour="6-8" specifies the hours 6, 7 and 8.
, Separates additional values. For example, dayofweek="MON,WED,FRI" means every Monday, Wednesday, and
Friday.
/ Specifies increments. For example, minutes="0/15" means start at minute 0 and run every 15 minutes.
L Specifies the last day. Used only for dayofmonth and dayofweek attributes. See the examples for clarification.
W Specifies the nearest weekday; use only with dayofmonth. For example, if you specify 1W for dayofmonth, and that
day is a Saturday, the trigger then fires on Monday the 3rd. You can combine this with L to schedule a process for
the last weekday of the month by specifying dayofmonth="LW".
# Specifies the nth day of the week within a month. For example, a dayofweek value of 4#2 means the second
Wednesday of the month (day 4 = Wednesday and #2 = the second Wednesday in the month).
These represent only some of the values that you can use to set a schedule.
Scheduler Examples
The following table lists a few examples of how to work with the <CronSchedule> element.
Example Description
<CronSchedule hours="10" /> Run every day at 10 a.m.
<CronSchedule hours="0" /> Run every night at midnight.
<CronSchedule minutes="15,45" /> Run at 15 and 45 minutes after
every hour.
<CronSchedule minutes="0/5" /> Run every five minutes.
<CronSchedule hours="0" dayofmonth="1" /> Run at midnight on the first day of
the month.
<CronSchedule hours="12" dayofweek="MON-FRI" dayofmonth="?" /> Run at noon every weekday (without
regard to the day of the month).
<CronSchedule hours="22" dayofmonth="L" /> Run at 10 p.m. on the last day of
every month.
<CronSchedule hours="22" dayofmonth="L-2" /> Run at 10 p.m. on the second-to-last
day of every month.
<CronSchedule minutes="3" hours="8-18/2" dayofweek="1-5" dayofmonth="?"/> Run 3 minutes after every other
hour, 8:03 a.m. to 6:03 p.m.,
Monday through Friday.
<CronSchedule minutes="*/15" hours="0-8,18-23"/> Run every 15 minutes after the hour,
12:15 a.m. to 8:45 a.m. and 6:15
p.m. to 11:45 p.m.
<CronSchedule hours="0" dayofmonth="6L" /> Run at midnight on the last Friday of
the month.
<CronSchedule hours="4" dayofmonth="4#2" /> Run at 4 a.m. on the second
Wednesday of the month.
Related information
Quartz Documentation
Procedure
1. Log into ClaimCenter as an administrative user.
2. Press ALT+SHIFT+T to access Server Tools.
3. Navigate to the Batch Process Info screen.
4. Select Schedulable from the processes drop-down filter.
ClaimCenter displays only those batch processes, including work queue writers, that it is possible to schedule
in file scheduler-config.xml.
Procedure
1. Log in to ClaimCenter as an administrative user.
2. Press ALT + SHIFT + T to access Server Tools.
3. Navigate to the Batch Process Info screen.
4. Click the Next Scheduled Run column header to sort processes by schedule.
If there is no current schedule for a process, the Next Scheduled Run field is blank.
In this way, you can have different results for batch processing based on environment.
Workers and work queues work-queue.xml “The Work Queue Configuration File” on page 96
Work queue configuration parameters config.xml Configuration Guide
Note: The hash mark in front of workqueue (#workqueue) indicates that the value that follows the hash mark is a
server role and not a server ID.
Attribute defaultServer requires a value. There is no default. The ClaimCenter server refuses to start if you do not
provide a value for this attribute. The server also refuses to start if you set defaultServer to a role that does not
exist in the <registry> element in config.xml.
The <work-queue> subelement has attributes to configure a named work queue in general. The <worker>
subelement has attributes to configure worker tasks on specific servers. You can declare as many workers as you
want for a work queue by specifying on which servers the workers run.
Access work-queue.xml in Guidewire Studio at the following location:
configuration→config →workqueue
See also
• “General Work Queue Configuration” on page 97
• “Worker Configuration” on page 98
Attribute Description
Required attributes
progressinterval The progressinterval value is the amount of time, in milliseconds, that
ClaimCenter allots for a worker to process the number of batchsize work
items. If the time a worker has held a batch of items exceeds the value of
progressinterval, then ClaimCenter considers the work items to be
orphans. ClaimCenter reassigns orphaned work items to a new worker
instance. The progressinterval value must be greater than the time to
process the slowest work item, or that work item never completes.
Guidewire recommends that you set the progressinterval value greater
than the processing time for an entire batchsize of work items:
• If a worker takes more time than the time specified by progressinterval
to process its assigned work items, ClaimCenter reverts the
remaining work items to available from checkedout.
• If many worker batches take longer than the time specified by
progressinterval, the repeated checking out and reverting to available of
work items can have a negative impact on performance.
workQueueClass (Required) The workQueueClass value must be one of the following:
• A Guidewire-provided work queue class listed in the base configuration
version of work-queue.xml
• A custom work queue class derived from Gosu class WorkQueueBase
You cannot configure Guidewire-provided batch processes or custom batch
processes derived from the Gosu class BatchProcessBase.
Optional attributes
blockWorkersWhenWriterActive If the work queue workers start execution before the work queue writer
completes writing work items to the work queue, it can possibly cause
performance issues under certain circumstances.
If set to true, ClaimCenter blocks the work queue workers from acquiring
new work items until the writer completes writing work items to the
queue. After the writer completes writing any new work items, the workers
automatically start acquiring work items again.
The default is false. Only enable this attribute for the work queues for
which you require this capability. Guidewire recommends that you
consider setting this attribute to true if the work queue writer can run for
extensive periods of time due to the work load generated.
logRetryableCDCEsAtDebugLevel If the value of logRetryableCDCEsAtDebugLevel is set to true for a work
queue, ClaimCenter logs any retryable Concurrent Data Change Exception
(CDCE) at the DEBUG level. The log message includes a prepended string to
indicate that the error is non-fatal.
ClaimCenter logs any CDCE that pushes the retry count over the value of
retryLimit, or the value of WorkItemRetryLimit if retryLimit is not set, at
the ERROR level.
retryInterval How long in milliseconds to wait before retrying a work item that threw an
exception. The default value is 0, meaning ClaimCenter retries processing
the item immediately.
retryLimit The number of times ClaimCenter retries a work item that threw an
exception or that became an orphan for this work queue.
If you do not specify a retryLimit value for a work queue, ClaimCenter
uses the value of the WorkItemRetryLimit configuration parameter in
config.xml as the default value.
IMPORTANT: Guidewire generally recommends that you increase, never
decrease, the number of retries for a work queue.
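For illustration only, a <work-queue> entry that combines these attributes might look like the following sketch. The class shown is the Guidewire-provided GeocodeWorkQueue from the base configuration; the numeric values are examples, not recommendations.
<work-queue workQueueClass="com.guidewire.pl.domain.geodata.geocode.GeocodeWorkQueue"
    progressinterval="600000"
    blockWorkersWhenWriterActive="true"
    retryInterval="5000"
    retryLimit="3">
  <worker instances="2"/>
</work-queue>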
Worker Configuration
The use of the <worker> element in work-queue.xml is optional. However, in actual practice, it is necessary for
there to be at least one <worker> element for each <work-queue> element for the work queue to operate properly.
The <worker> element contains an instances attribute that has a default value of 1. Without a <worker> element to
provide this default, the processing logic does not allocate any workers for the work queue.
All of the following attributes are optional.
Attribute Description
instances The number of workers to create. By default, ClaimCenter sets the value of this attribute to 1.
If a worker wakes up and detects work items, it checks out those work items from the work queue. If
there are more work items than the value specified by the batchsize attribute, the worker starts another
worker. Each new worker checks out up to the maximum batchsize number of work items. If there are
more work items remaining, the new worker starts another worker. The creation of workers continues
until the number of workers reaches the maximum limit of workers as specified by the instances
attribute.
maxpollinterval How often a worker wakes up automatically and queries for work items, even if the worker receives no
notification. You might need to increase the value of maxpollinterval to prevent excessive numbers of
queries for work items. The default value of maxpollinterval is 60,000 milliseconds.
throttleinterval The delay between processing work items in milliseconds. The value controls how long the process
sleeps. A value of 0 (zero) means worker tasks process work items as rapidly as possible. To reduce the
CPU load, set the value of throttleinterval to a positive non-zero value.
batchsize How many work items the worker attempts to check out while searching for more work items. Larger
batch sizes are more efficient, but might not result in good load distribution. The default value for
batchsize is 10.
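For illustration only, a <worker> element that combines these attributes might look like the following sketch; all of the values are examples, not recommendations.
<!-- two workers per eligible server, 20 items per check-out, poll every 2 minutes, pause 50 ms between items -->
<worker instances="2" batchsize="20" maxpollinterval="120000" throttleinterval="50"/>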
See also
• For information about the definition of the env and the serverid values in the cluster registry in config.xml,
see “Understanding the Configuration Registry Element” on page 46.
Note: The hash mark in front of workqueue (#workqueue) indicates that the value that follows the hash mark is a
server role and not a server ID.
The workqueue role is merely the default role, however. You are free to create and assign new work queue
management roles. You can also use server roles to enable or disable certain work queues on a specific ClaimCenter
server.
To add a specialized work queue role, say, one to use in managing activity work queues, you need merely to add the
new server role to the list of roles:
See also
• “Defining a New Server Role” on page 50
<?xml version="1.0"?>
<work-queues xmlns="http://guidewire.com/work-queue" defaultServer="#workqueue">
<work-queue workQueueClass="com.guidewire.pl.domain.escalation.ActivityEscalationWorkQueue" … >
<worker server="#activityworkqueue"/>
</work-queue>
<work-queue workQueueClass="com.guidewire.pl.domain.geodata.geocode.GeocodeWorkQueue" … >
<worker/>
</work-queue>
</work-queues>
In this example:
1. If a server has the workqueue role only, then that server:
a. Starts an executor for the GeocodeWorkQueue work queue.
b. Does not start an executor for the ActivityEscalationWorkQueue work queue.
2. If a server has the activityworkqueue role only, then that server:
a. Starts an executor for ActivityEscalationWorkQueue work queue.
b. Does not start an executor for the GeocodeWorkQueue work queue.
3. If a server has both the activityworkqueue and workqueue roles, then that server starts executors for both
work queues.
4. If a server has neither the activityworkqueue nor the workqueue role, then the server does not start an
executor for either of these work queues.
See also
• “Assigning Server Roles to ClaimCenter Cluster Servers” on page 50
For each completed work queue that it finds, Process Completion Monitor:
• Determines if all the work items in that work queue have either completed or failed.
• Calls the IBatchCompletedNotification plugin implementation on a process if the process is complete and has
no remaining available or checked-out work items.
• Sets ProcessHistory.NOTIFICATIONSENT to true to invoke the IBatchCompletedNotification plugin
implementation a single time only for any given process.
The IBatchCompletedNotification interface has a completed method that you can override to perform specific
actions if a work queue or batch process finishes a batch of work. The parameters of the completed method are the
ProcessHistory entity and the number of failed items. ClaimCenter considers work queue processing as complete
if no work items remain on the queue, other than work items that failed. ClaimCenter considers a batch process as
complete if the process stopped and its process history is available.
See also
• “Schedule the Process Completion Monitor Batch Process” on page 101
• “Implement the IBatchCompletedNotification Interface” on page 101
• “Register a Custom Batch Notification Plugin” on page 102
• “Process Completion Monitor Batch Processing” on page 121
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→scheduler.
a. Open scheduler-config.xml.
b. Add the following <ProcessSchedule> element:
<ProcessSchedule process="ProcessCompletionMonitor">
<CronSchedule minutes="*/5"/>
</ProcessSchedule>
Next steps
See also
• “Understanding a Work Queue Schedule Specification” on page 93
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→gsrc.
2. Create a Gosu class that implements the IBatchCompletedNotification interface. The following sketch uses a hypothetical class name and parameter names, and bases the completed method signature on the description of the interface earlier in this topic:
package myCompany.plugin.workqueue
uses gw.plugin.workqueue.IBatchCompletedNotification

// Hypothetical example class name
class MyBatchCompletedNotification implements IBatchCompletedNotification {
  construct() { }

  // completed runs after a work queue or batch process finishes a batch of work.
  // Per this topic, its parameters are the ProcessHistory entity and the number of failed items.
  override function completed(processHistory : ProcessHistory, numFailedItems : int) {
    // do something, for example log the result or create a notification
    return
  }
}
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Plugins.
2. Right-click registry and click New→Plugin.
3. In the Plugin dialog, enter IBatchCompletedNotification for the name of your plugin.
4. In the Plugin dialog, click …
5. In the Select Plugin Class dialog, type IBatchCompletedNotification and select the IBatchCompletedNotification
interface.
6. In the Plugin dialog, click OK.
Studio creates a GWP file under Plugins→registry with the name that you entered.
7. Click the Add Plugin icon and select Add Gosu Plugin.
8. For Gosu Class, enter your class, including the package.
9. Save your changes.
Next steps
See also
• Configuration Guide
Activity Escalation batch processing finds activities that meet certain escalation criteria and marks the activity for
escalation. The batch processing logic looks for activities that meet each of the following criteria:
• The activity has an escalation date.
• The escalation date is prior to today’s date.
• ClaimCenter has not previously escalated the activity.
If Activity Escalation batch processing finds an activity that meets all the criteria, it marks the activity as escalated
and calls the activity escalation rules to determine any actions.
If you set your escalation deadline in days, then there is no reason to run activity escalation more than daily.
However, if your escalation deadline is shorter, then run this process more frequently to take action on overdue
activities in a timely manner. By default, ClaimCenter runs Activity Escalation batch processing every 30 minutes.
As indicated, you can change this schedule as needed.
By default, Guidewire disables Activity Escalation batch processing in the base configuration by setting the number
of workers in work-queue.xml to 0. You must set the number of workers in this file to 1 or more to be able to run
this process.
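For example, the corresponding entry in work-queue.xml might look like the following sketch, with other attributes omitted:
<work-queue workQueueClass="com.guidewire.pl.domain.escalation.ActivityEscalationWorkQueue" … >
  <worker instances="1"/> <!-- 1 or more workers enables Activity Escalation batch processing -->
</work-queue>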
See also
• “Configuring Work Queues” on page 96
• Application Guide
• Configuration Guide
• Rules Guide
AddressDeleteWorkQueue is an internal work queue that ClaimCenter uses to delete orphaned addresses. During
every bundle commit, ClaimCenter identifies addresses that are potentially orphaned as a result of that bundle
commit. Potentially orphaned addresses fall into the following categories:
• A newly inserted address that does not have any entity pointing to it.
• The original address on an entity that now contains an updated address.
For each identified potential orphan, the work queue creates an AddressDeleteWorkItem work item and associates
it with the public ID of the address. As the work queue processes a work item, it attempts to delete the corresponding
address row from the Address table. If there are any remaining foreign key references to the address from any other
table, the delete operation fails silently. The work queue then continues to process any remaining work items.
During a bundle commit, ClaimCenter calls a writer that creates the actual work items. The writer creates the work
items such that they only become available for processing one minute after creation, by default. This delay gives a
user time to associate a newly orphaned address with a different entity, if desired. It is possible to configure this time
delay through configuration parameter AddressDeletionDelay in file config.xml.
See also
• Configuration Guide
Aggregate Limit Calculations batch processing forces a recalculation of the aggregate limits stored in the
ClaimCenter database. (An aggregate limit is the maximum financial amount that an insurer must pay on a policy or
coverage during a given policy period.) The process then repopulates the database tables with this updated data. Run
Aggregate Limit Calculations batch processing only if you encounter consistency check failures and cannot identify
the reason for the inconsistency.
By default, Guidewire sets the number of workers in work-queue.xml to 2 in the base configuration. However,
Guidewire recommends that you use substantially more workers with Aggregate Limit Calculations batch
processing. Using just a few workers in a large database can take a very long time.
The optimal number of workers to use varies according to the available hardware and the volume of the data
involved. It is also possible to allocate work queue workers to several different ClaimCenter servers rather than
simply increasing the number of workers on a single server.
You can run Aggregate Limit Calculations batch processing in the following ways:
• From the ClaimCenter Server Tools Batch Process Info screen available to administrators.
• From a command prompt using the maintenance_tools -rebuildagglimits command option. If using the
maintenance_tools command, the following command options are also useful:
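The additional command options referenced above are not reproduced here. As a rough sketch only, and assuming the -user and -password options described for the other command prompt tools in this guide, an invocation might look like the following:
maintenance_tools -user <username> -password <password> -rebuildagglimits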
See also
• “Run a Writer or Batch Process from the Command Prompt” on page 91
• “Configuring Work Queues” on page 96
• “Maintenance Tools Command” on page 418
Archive batch processing archives claims. For a claim to be eligible for archiving, the server time must have reached
the Claim.DateEligibleForArchive date for the claim.
Archive batch processing makes large changes to database tables. After running Archive batch processing,
Guidewire recommends that you update database statistics. Updating database statistics enables the optimizer to pick
better queries based on more current data.
See also
• “Understanding Database Statistics” on page 279
• Application Guide
• Integration Guide
You run this work queue once to find all references from any archived documents to any object instances in the
entity graph. This work queue creates a table of archived objects to make it faster to make this determination.
In ClaimCenter, this work queue also adds ClaimInfo to all claims, including archived claims.
See also
• For a full description of the ArchiveReferenceTrackingSync work queue, see the Configuration Guide.
Bulk Invoice Escalation batch processing updates bulk invoices that meet the following criteria:
• A status of Awaiting submission
• A send date of the current date or later
This process transitions the status of each retrieved bulk invoice to Requesting. It also escalates all the checks
associated with the items on the invoice to Submitting status, as long as their PendEscalationForBulk fields are
true.
Categories UIRunnable
Class BulkInvoiceWorkQueue.java
Bulk Invoice Submission batch processing processes bulk invoice items for bulk invoice submission.
Processing each item consists of creating the placeholder check on that item's associated claim. After the approval of
a bulk invoice, Bulk Invoice Submission batch processing runs against the bulk invoice and creates a work item for
each bulk invoice item. Then, batch processing workers pick up each item and process that work item's bulk invoice
item.
Note: If a bulk invoice remains in the PendingItemValidation status and all workers have finished, run the Bulk
Invoice Submission batch process again.
Bulk Invoice Workflow Monitor batch processing transitions the status of each bulk invoice to one of the following:
• Awaiting submission – The bulk invoice contains only approved checks.
• Invalid bulk invoice – The bulk invoice contains rejected checks.
Bulk Purge batch processing deletes records through table deletes and by notifying the IArchiveSource plugin
implementation to delete archived claims. This process looks for ClaimInfo objects marked as retired and then
traverses the archive domain graph to delete entities, presumably already retired, related to the retired claim.
See also
• “Purging Claim Data” on page 268
Catastrophe Claim Finder batch processing finds possible claims related to a catastrophe. For each claim that it finds,
Catastrophe Claim Finder creates an activity to review the claim. The purpose of the activity is to determine if there
is an association between the claim and the catastrophe.
By default, this work queue operates on claims that meet the following criteria:
• The claim must not be already associated with a catastrophe.
• The claim loss date must be within the effective date range of the catastrophe.
• The claim loss type and loss cause must match the coverage perils for the catastrophe.
• The claim must not have skipped or completed the catastrophe_review activity.
• The claim must be active (not retired).
You can modify which claims the work queue operates on by changing the findClaims() method in
GWCatastropheEnhancement.gsx. Access this enhancement in Studio at configuration→Classes→gw→entity.
IMPORTANT Catastrophe Policy Location Download batch processing requires integration with a policy
administration system. If that policy administration system is not Guidewire PolicyCenter, you must modify the
batch processing code to work with that system. Guidewire strongly recommends that you modify Gosu class
CatastrophePolicyLocationDownload rather than base class CatastrophePolicyLocationDownloadBase.
Catastrophe Policy Location Download batch processing looks for catastrophes for which the last modified date is
later than the date of the last download of policy locations. For each catastrophe that qualifies, the process updates
the policy locations in the modified area in two phases:
1. Location Query Phase – Queries the policy administration system for locations within the area of interest
through the PolicyLocationsSearchAPI web service that PolicyCenter publishes
2. Claim Matching Phase – Matches policy locations downloaded from the policy administration system with
policies on claims that already are in ClaimCenter
Note: Some, possibly many, policy locations can fail to match existing claims. The usual case is that there are
many more policy locations than claims in an area of interest.
• Integration Guide
IMPORTANT You can run Claim Contacts Calculations batch processing only if the ClaimCenter server is in
maintenance mode.
In the base configuration, ClaimCenter de-normalizes several contact fields into different objects for performance
reasons. Claim Contacts Calculations batch processing recalculates the denormalization fields in case the fields ever
fall out of synchronization.
This process updates the following fields:
• Exposure.ClaimantDenorm
• Claim.ClaimantDenorm
• Claim.InsuredDenorm
• CheckPayee.PayeeDenorm
• Recovery.PayerDenorm
Only run this process if you encounter consistency check failures. The server must be at the maintenance run level to
run Claim Contacts Calculations. You cannot run this process from the ClaimCenter user interface or schedule the
process. You can only run Claim Contacts Calculations by calling MaintenanceToolsAPI.
See also
• “Place the Server in Maintenance Mode” on page 63
• “Run a Writer or Batch Process from the Command Prompt” on page 91
Claim Exception batch processing runs claim exception rules on claims that meet either of the following criteria:
• The claim contains updates since the process last ran.
• The exposures, transactions, claim contacts, claim contact roles, or matters associated with the claim contain
updates since the process last ran.
Claim Exception batch processing also runs claim exception rules on claims created since Claim Exception
processing last ran.
Configuration parameter SeparateIdleClaimExceptionMonitor controls whether Idle Claim Exception and Claim
Exception run independently or together. If SeparateIdleClaimExceptionMonitor is false, then a single Claim
Exception run processes both recently modified and idle claims through the claim exception rules.
ClaimCenter sets configuration parameter SeparateIdleClaimExceptionMonitor to true in the default
configuration. If you set the parameter to false, also disable the schedule for running Idle Claim Exception batch
processing.
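For example, the parameter definition in config.xml looks similar to the following sketch, which keeps the two
processes separate. The <param> syntax shown is the standard config.xml convention; surrounding elements are
omitted:
<param name="SeparateIdleClaimExceptionMonitor" value="true"/>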
Because running claim exceptions is processing intensive, Guidewire recommends that you schedule Claim
Exception batch processing during periods of low activity.
See also
• “Idle Claim Exception Batch Processing” on page 119
• Rules Guide
Claim Health Calculations batch processing calculates health indicators and metrics for all claims for which no
calculated metrics exist. The value of MaxClaimResultsPerClaimHealthCalcBatch configuration parameter in
config.xml determines the maximum number of claims that Claim Health Calculations processes in a single run.
Guidewire recommends that you run Claim Health Calculations batch processing in the following circumstances:
• After an upgrade from a version of ClaimCenter that did not have Claim Metrics, meaning ClaimCenter 6.0 or
earlier.
• Optionally, after staging table loading of claims into the database.
Bulk Claim Validation batch processing also forces the creation of Claim Health Metrics on the claims that it
processes if metrics do not exist. Therefore, if you run Bulk Claim Validation batch processing for each batch of
claims loaded through staging tables, you do not need to run Claim Health Calculations batch processing.
Claim Health Calculations does not update metrics on claims that already have metrics. Guidewire provides a
sample claim exception rule that you can use to force creation of new claim metrics, exposure metrics, and claim
indicators on every claim. The name of the sample rule is CER04000 - Recalculate claim metrics.
See also
• “Claim Validation Batch Processing” on page 110
• Rules Guide
• Configuration Guide
Categories APIRunnable
Workers 0
Schedule Not schedulable
Class ClaimValidationWorkQueue.java
Claim Validation batch processing creates work items to schedule loaded claims for validation. By default,
Guidewire disables Claim Validation batch processing in the base configuration by setting the number of workers in
work-queue.xml to 0. You must set the number of workers in this file to 1 or more to be able to run this process.
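For example, a work-queue.xml entry for this work queue might look similar to the following sketch. The attribute
names shown here are illustrative assumptions; edit the entry that already exists in your work-queue.xml rather than
copying this sketch verbatim:
<work-queue workQueueClass="ClaimValidationWorkQueue" workers="1" batchsize="5"/>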
Parameter ID is a String value that identifies the conversion batch process that imported the claims. This value is
available through the TableImportResult object returned from a table import operation. For example:
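(All names in this Gosu sketch other than TableImportResult are placeholders; see the Integration Guide for the
actual table import API.)
// Run a table import and capture the identifier of the load command.
var importResult = runTableImport()
var loadCommandPublicID = importResult.LoadCommandPublicID
// Pass loadCommandPublicID as the ID parameter when starting Claim Validation batch processing.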
The value of loadCommandPublicID is the public ID from table CC_LOADCOMMAND of the claims that were loaded
during an import process. You can use a SQL query similar to the following to determine the public ID.
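A minimal sketch follows. The PublicID column follows standard Guidewire naming; verify the column names
against your own database schema:
SELECT PublicID FROM CC_LOADCOMMAND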
ClaimCenter (SPM) Completed Review Sync batch processing transmits completed reviews of service providers to
Guidewire ContactManager. ContactManager uses this information to construct a ReviewSummary object. After the
process submits a review to ContactManager, the process sets the AddressBookUID field on the ClaimCenter Review
object, indicating the transmission of the review.
See also
Application Guide
Contact Retire batch processing checks for contacts that are potentially eligible for retirement and removes them. On
bundle commit, ClaimCenter checks for any entities that have a foreign key link to the contact and creates
ContactRetireWorkItem objects for any contacts affected by either of the following:
• The owning entity is selected for retirement.
• The property is changed and a new value is set.
As Contact Retire batch processing runs, it calls the ContactRetireHelper.retireContact() method to attempt
to retire the contact referenced by each ContactRetireWorkItem.
See also
• Guidewire Contact Management Guide
ContactAutoSync batch processing synchronizes contact information between Guidewire ContactManager and
Guidewire ClaimCenter. You can configure ClaimCenter to synchronize contact information whenever a user
updates a contact, or you can schedule the ContactAutoSync work queue to synchronize contact information at a set
time.
See also
• “The Work Queue Scheduler” on page 93
• Guidewire Contact Management Guide
Dashboard Statistics batch processing recalculates the statistics for data shown in the ClaimCenter Dashboard tab that
is available to supervisors only.
See also
• Application Guide
Categories APIRunnable
Data Distribution batch processing generates data on the distribution of various items in the ClaimCenter database. It
is not possible to schedule this process. You must run this process from the Data Distribution screen of the Server
Tools Info Pages or by using the maintenance_tools command line utility.
Because this type of batch processing can be very resource intensive, it can adversely affect the
performance of the application. Before you run this process in a production environment, Guidewire recommends
that you run the process first against a test environment that contains a full copy of production data.
See also
• “Data Distribution” on page 372
• “Maintenance Tools Command” on page 418
• Integration Guide
Categories Schedulable
Database Consistency Check batch processing runs consistency checks on the ClaimCenter database.
Use the Server Tools Info Pages→Consistency Checks screen to launch the checks from ClaimCenter. You can set the
number and type of workers to use in running the consistency checks through this screen.
Alternatively, to schedule consistency checks, use the system_tools command, adding the optional information on
which checks to run against which tables. See “System Tools Command” on page 422 for the command syntax.
See also
• “Work Queues” on page 86 for a discussion of how ClaimCenter handles work queues.
• “Database Consistency Checks” on page 262 for an overview of database consistency checks
• “Consistency Checks” on page 361 for details of the Consistency Checks screen in ClaimCenter
• “Command Prompt Tools” on page 413 for an explanation of command prompt options
Categories Schedulable
Database Statistics batch processing generates database statistics about how the ClaimCenter application and data
model interact with the physical database. For example, database statistics store row counts in a table, how a table
distributes the data, and much more. A database management system uses statistics to determine query plans to
optimize performance.
IMPORTANT Do not run or schedule this process if you set <databasestatistics> attribute
useoraclestatspreferences to true in file database-config.xml.
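For reference, the setting in database-config.xml looks similar to the following sketch. Any other attributes on the
element, and the enclosing elements, are omitted here:
<databasestatistics useoraclestatspreferences="true"/>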
Development Mode
In development mode, it is possible to run Database Statistics batch processing in any of the following ways:
• From a command prompt, using the -updatestatistics option of the system_tools command
• From the Execution History tab of the Server Tools Database Statistics screen
• As a scheduled batch process
Production Mode
In production mode, it is possible to run Database Statistics batch processing in the following ways only:
• From a command prompt, using the -updatestatistics option of the system_tools command (see the example after
this list).
• As a scheduled batch process
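For example, the following command sketch updates statistics from the admin/bin directory of the ClaimCenter
installation. The user name and password are placeholders for an account with administrative privileges:
system_tools -user <admin user> -password <password> -updatestatistics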
Guidewire recommendation
Guidewire specifically recommends that you collect full statistics in the following circumstances:
• If there are significant changes to data such as after a major upgrade.
• If using the table_import -integritycheckandload or zone_import command.
• If you are trying to troubleshoot performance problems.
In all other cases, Guidewire recommends that you collect INCREMENTAL database table statistics only.
See also
• “Understanding Database Statistics” on page 279
• “Managing Database Statistics using System Tools” on page 282
• “Database Statistics” on page 373
• “System Tools Command” on page 422
Categories APIRunnable
Deferred Upgrade Tasks batch processing creates the nonessential performance indexes and indexes on archived
entities.
ClaimCenter runs Deferred Upgrade Tasks batch processing automatically after an upgrade if you set the following
attribute on <upgrade> in database-config.xml to true:
defer-create-nonessential-indexes
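For example, the attribute appears on the <upgrade> element in database-config.xml similar to the following sketch.
Any other attributes on the element are omitted here:
<upgrade defer-create-nonessential-indexes="true"/>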
If DeferredUpgradeTasks batch processing fails, manually run the batch process again during non-peak hours from
the command prompt. See “Maintenance Tools Options” on page 418 for the command to use.
Note: To run this command, you must have permission to create indexes until after the DeferredUpgradeTasks
batch process completes.
To check the status of the DeferredUpgradeTasks batch processing, review the upgrade logs and the ClaimCenter
Server Tools Upgrade and Versions screen.
Production Mode
Do not go into full production while the Deferred Upgrade Tasks process is still running. Because so many
performance-related indexes are missing, ClaimCenter is likely to be unusable.
Until the Deferred Upgrade Tasks batch process has run to completion, ClaimCenter reports errors during schema
validation while starting. These include errors for column-based indexes existing in the data model but not in the
physical database and mismatches between the data model and system tables.
Do not use the ClaimCenter archiving feature until the Deferred Upgrade Tasks batch processing completes
successfully.
See also
• “Upgrade and Versions” on page 393
• “Maintenance Tools Options” on page 418
• Upgrade Guide
Categories Schedulable
This work queue finds all PersonalDataContactDestructionRequest objects that have a status set to New or
ReRun (category ReadyToAttemptDestruction). How far the destruction process went for the found contacts is
determined by the ContactDestructionStatus returned from the Destroyer, the class that implements the
PersonalDataDestroyer interface.
The contact destruction status is set to the returned status. If the status is Completed, Partial, or NotDestroyed
(category DestructionStatusFinished), the date of completion is also populated.
An exception is thrown if the returned status is New or if you try to change the status from a typecode in the
DestructionStatusFinished category.
See also
• Configuration Guide
Encryption Upgrade batch processing upgrades encryption fields for previous entity versions. ClaimCenter supports
the encryption of sensitive data, such as tax IDs, on claim snapshots.
The ClaimSnapshot entity includes an EncryptionVersion integer column. This column stores a number
representing the version of the IEncryption plugin implementation used to encrypt the snapshot fields.
ClaimCenter also stores the current encryption version. If the encryption metadata or the plugin algorithm changes,
ClaimCenter increments this version number.
The Encryption Upgrade work queue recalculates encrypted field values for any ClaimSnapshot that has a value for
EncryptionVersion less than the current encryption version. ClaimCenter decrypts the encrypted value by using
the original plugin implementation. Then, ClaimCenter encrypts the value with the new plugin implementation.
Finally, ClaimCenter updates the EncryptionVersion of the snapshot to mark it current.
You can adjust the number of snapshots that this process upgrades at one time by modifying the
SnapshotEncryptionUpgradeChunkSize parameter in config.xml. A value of 0 for
SnapshotEncryptionUpgradeChunkSize sets no limit to the number of snapshots upgraded at once.
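For example, the following config.xml sketch limits each run to 1000 snapshots. The value shown is illustrative
only; the <param> syntax is the standard config.xml convention:
<param name="SnapshotEncryptionUpgradeChunkSize" value="1000"/>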
See also
• Integration Guide
Exchange Rate batch processing uses a class that implements the IExchangeRateSetPlugin interface to populate
exchange rate data in ClaimCenter.
See also
• Application Guide
Categories MaintenanceOnly
IMPORTANT Before you run this batch process, first run Database Consistency Check batch processing and verify
that there are no consistent children failures.
Financials Calculations batch processing recalculates denormalized financial values. ClaimCenter denormalizes
several financial fields into different objects for performance reasons. The Financials Calculations process
recalculates denormalization fields in case they ever become unsynchronized. This process updates the following
entities:
• CheckRpt
• ClaimRpt
• ExposureRpt
Each of these entities stores denormalized financials totals to improve performance when ClaimCenter displays the
entity. These entities are kept current by the normal operation of ClaimCenter. However, issues in a custom
configuration can cause the values to become unsynchronized and cause database consistency checks to fail. To help
address such issues, Guidewire Support might ask you to run this batch process.
Never add new claims to ClaimCenter with the web service APIs while the Financials Calculations batch process is
running. This applies to both addFNOL() and migrateClaim() methods.
This batch process can only be run from the command prompt while the server is at the maintenance run level.
Warning
Financials Calculations batch processing recalculates denormalized financial values in two distinct steps:
1. It first zeros out all denormalization fields in all the RPT tables.
2. It then recalculates the denormalization fields in the RPT tables.
If Financials Calculations batch processing does not complete successfully, there is a risk that ClaimCenter can leave
all denormalization (transaction summary) fields in the RPT tables zeroed out. If this is the case, ClaimCenter users
can see this result in the financial summary screens.
There are multiple reasons that Financials Calculations batch processing can fail:
• Restarting the database server while Financials Calculations batch processing is executing.
• Manually stopping Financials Calculations batch processing before the process completes, perhaps because batch
processing is taking longer than the scheduled maintenance time window.
• Encountering a triangle inconsistency DBCC error for a claim, in which the child objects on the Claim object do
not consistently point to the same claim.
If Financials Calculations batch processing fails for any reason, do the following:
1. Determine the root cause of the batch processing failure and fix it.
2. Rerun Financials Calculations batch processing and ensure that it completes successfully.
Financials Escalation batch processing changes the status of unissued payments that have passed their date
requirements from Awaiting Submission to Submitting. ClaimCenter can then send the unissued payments to the
financial system.
Although the intention for many payments is to produce a check as soon as possible, it is possible to schedule the
issuance of a check for a future date. This is common, for example, when creating a schedule of recurring
payments.
See also
• Application Guide
Geocode Writer batch processing is the writer for the Geocode work queue. This work queue runs periodically to
update geocoding information on user contact (UserContact) primary addresses and account locations. The
UserContact entity represents a ClaimCenter user.
See also
• “Understanding Geocoding” on page 79
• “Configuring Geocoding” on page 80
Group Exception batch processing runs any defined group exception business rules on all groups in the system.
By default, Guidewire disables Group Exception batch processing in the base configuration by setting the number of
workers in work-queue.xml to 0. You must set the number of workers in this file to 1 or more to be able to run this
process.
See also
• “Configuring Work Queues” on page 96
• Rules Guide
Idle Claim Exception batch processing runs the claim exception rules on open claims that are idle. ClaimCenter
considers a claim to be idle if the exception rules have not been run on it in the number of days specified by the
IdleClaimThresholdDays configuration parameter.
The SeparateIdleClaimExceptionMonitor configuration parameter controls whether Idle Claim Exception and
Claim Exception run independently or together. If SeparateIdleClaimExceptionMonitor is false, then a single
Claim Exception run processes both recently modified and idle open claims through the claim exception rules.
SeparateIdleClaimExceptionMonitor is set to true in the default configuration. If you set
the parameter to false, also disable the schedule for running Idle Claim Exception batch processing.
See also
• “Claim Exception Batch Processing” on page 109
• Rules Guide
Idle Closed Claim Exception batch processing runs the claim exception rules on closed claims that are idle.
ClaimCenter considers a claim to be idle if the exception rules have not been run on it in the number of days
specified by the IdleClosedClaimThresholdDays configuration parameter. By default, Guidewire disables Idle
Closed Claim Exception in the scheduler.
See also
• Rules Guide
Categories Schedulable
This work queue finds all PersonalDataDestructionRequest objects that have a status typecode in the
DestructionStatusFinished category and RequestersNotified set to false. Found requests are processed by
sending a notification to all associated requesters, and RequestersNotified is then marked true. If the notification
fails, an exception is thrown and RequestersNotified remains false.
Note: The class that implements this work queue is
PersonalDataDestructionNotifyExternalSystemsWorkQueue. In your implementation, you must verify that the
notification was successful before marking RequestersNotified true.
A method on the PersonalDataDestruction plugin, notifyExternalSystemsRequestProcessed, is called by the
PersonalDataDestructionNotifyExternalSystemsWorkQueue to notify external systems when a personal data
destruction request is completed. The original RequestID is passed to the method, which does nothing by default.
You are expected to implement this method to notify systems of interest. The RequestID is received when the
destruction request is originally created through the PersonalDataDestructionAPI web service.
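For example, the following Gosu sketch overrides the method in a custom subclass of the base plugin class. The
exact method signature, including the parameter type, is an assumption; check the plugin interface in Studio before
implementing it:
class MyPersonalDataDestructionPlugin extends CCPersonalDataDestructionSafePlugin {
  override function notifyExternalSystemsRequestProcessed(requestID : String) {
    // Replace this placeholder with calls to the external systems of interest.
    print("Personal data destruction request completed: " + requestID)
  }
}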
Note: In the base configuration, the class that implements the PersonalDataDestruction plugin is
CCPersonalDataDestructionSafePlugin.
See also
• Configuration Guide
Categories UIRunnable
IMPORTANT Run Phone Number Normalizer batch processing once only, after upgrading from earlier application
versions to 8.0.0+. Disable Phone Number Normalizer batch processing in a production environment.
Phone Number Normalizer batch processing calls the registered plugin that implements the
IPhoneNumberNormalizer interface. Use Phone Number batch processing to normalize phone numbers after
upgrading from earlier versions of ClaimCenter to 8.0.0+.
Guidewire recommends that you use a substantial number of workers with Phone Number Normalizer batch
processing. Using a small number of workers to normalize the phone numbers in a large database can take a very
long time. The optimal number of workers to use varies according to the available hardware and the volume of the
data involved. It is also possible to allocate workers to several different ClaimCenter servers rather than simply
increasing the number of workers on a single server.
Disable this work queue after the process completes normalizing all old phone numbers by setting the number of
workers in work-queue.xml to 0. You never need to run Phone Number Normalizer batch processing more than once,
after an upgrade to 8.0.0+.
See also
• “Configuring Work Queues” on page 96
• Upgrade Guide
Categories APIRunnable
Populate Search Columns batch processing populates denormalized searchColumn columns from their designated
sourceColumn columns.
This process is only available from the maintenance_tools command or from a web service.
See also
• Configuration Guide
Process Completion Monitor batch processing runs at schedulable intervals and examines the ProcessHistory table
for all completed work queues and batch processes.
For each completed work queue that it finds, Process Completion Monitor:
• Determines if all the work items in that work queue have either completed or failed.
• Calls the IBatchCompletedNotification plugin implementation on a process if the process is complete and has
no remaining available or checked-out work items.
• Sets ProcessHistory.NOTIFICATIONSENT to true, to prevent the process from invoking the
IBatchCompletedNotification plugin implementation more than a single time for any given process.
The IBatchCompletedNotification interface has a completed method that you can override to perform specific
actions after a work queue or batch process completes a batch of work.
See also
• “Performing Custom Actions After Batch Processing Completion” on page 100.
Process History Purge batch processing purges batch process history data from the ProcessHistory table. It is
important to periodically delete process history data. A large number of history records in the database can slow
performance during use of the Server Tools Batch Process Info or Work Queue Info screens.
This process uses Gosu class ProcessHistoryPurge to read the value of the BatchProcessHistoryPurgeDaysOld
parameter in config.xml. The process then uses this value to determine the history data to purge.
Note: ClaimCenter also uses configuration parameter BatchProcessHistoryPurgeDaysOld to determine how
many days to retain process history records, which the separate “Work Item Set Purge Batch Processing” on page
129 process removes.
Purge Cluster Members batch processing purges ClusterMemberData entities. This process uses Gosu class
PurgeClusterMembers to read the value of the ClusterMemberPurgeDaysOld parameter in config.xml. The
process then purges the ClusterMemberData entities that have a LastUpdate value prior to the current date minus
the value of the ClusterMemberPurgeDaysOld parameter.
Purge Failed Work Items batch processing purges failed work items from all work queues. This process uses Gosu
class PurgeFailedWorkItems to determine which work items to delete.
During a scheduled execution of the batch process, the batch process deletes failed work items that are older than the
last run date of the batch process. It then sets the last run date to the current date. Thus, if the scheduled execution of
this batch process is monthly, the process deletes only work items that are older than a month.
If you run this batch process manually and there are work items that are newer than the last run date, the batch
process does not delete them. If you then run the batch process a second time on the same day, the process deletes
work items that are older than the current date. This is the expected behavior.
Purge Message History batch processing purges old messages from the message history table. The
KeepCompletedMessagesForDays parameter in config.xml specifies how many days a message can remain in the
message history table before Purge Message History batch processing removes the message.
Purge Old Transaction IDs batch processing deletes SOAP header transaction IDs generated by systems external to
ClaimCenter. This process uses Gosu class PurgeTransactionIds to read the value of the
TransactionIdPurgeDaysOld parameter in config.xml. The process then purges transaction IDs that have a
creation date prior to the current date minus the value of the TransactionIdPurgeDaysOld parameter.
Guidewire does not schedule this batch process in the base configuration as the table that stores the transaction IDs
takes very little space in the database. Unless there is a constant buildup of these transaction IDs, there is no real
need to continually purge this data. In fact, if you do purge this data, it is then not possible to determine if a new
transaction is a duplicate of a transaction sent by the external system at an earlier date. There are other alternatives to
purging this data. For example, you can partition the table by date.
See the ClaimCenter Integration Guide for information on SOAP headers. Also see
WsiCheckDuplicateExternalTransaction.
Purge Profiler Data batch processing purges profiler data at regularly specified intervals. This process uses the read-
only ProfilerDataPurgeBatchProcess class to read the value of the ProfilerDataPurgeDaysOld parameter in
config.xml. The process then uses the value of this parameter to determine how many days to retain profiler data
before Purge Profiler Data batch processing removes it.
Purge Workflow batch processing purges completed workflows after resetting any referenced workflows. This
process uses Gosu class PurgeWorkflow to read the value of the WorkflowPurgeDaysOld parameter in
config.xml. The process then uses this value to determine the number of days to retain workflow data before
purging it.
Purge Workflow Logs batch processing purges completed workflow logs. This process uses Gosu class
PurgeWorkflowLogs to read the value of WorkflowLogPurgeDaysOld parameter in config.xml. The process then
uses this value to determine the number of days to retain workflow logs before purging them.
Recalculate Claim Metrics batch processing recalculates claim metrics on claims for which the metric update time
has already passed.
See also
• Configuration Guide
Categories Schedulable
Changing the policy or risk units within the FNOL wizard leads to residual data links that can prevent ClaimCenter
from archiving a claim. Retired Policy Graph Disconnector batch processing identifies and removes those links.
The process uses the Gosu class RetiredPolicyGraphDisconnector to perform the uncoupling of claim and policy.
Suppose, for example, that you added extension properties that produce new links between policy and non-policy
entities within a claim. In this case, you need to add additional logic to the RetiredPolicyGraphDisconnector class
to unlink them.
Procedure
1. In Guidewire ClaimCenter, open the FNOL wizard to create a claim.
2. Select a policy for the claim. ClaimCenter attaches the policy to the new claim.
3. Select a different policy for the claim. This action causes the FNOL wizard to retire the previous policy, unlink
it from the claim, and then link the new policy to the claim.
4. Run Retired Policy Graph Disconnector batch processing. The process severs the remaining links between the
claim and the unused policy.
5. Schedule the claim for archiving.
6. Run Archive batch processing (see “Archive Batch Processing” on page 105) and verify that it is possible to
archive the claim.
Next steps
See also
• Integration Guide
Service Request Metric Escalation batch processing escalates service metrics if they have exceeded an upper limit.
Solr Data Import batch processing tests the operation of the free-text batch load command, especially its embedded
SQL query. Only run Solr Data Import batch processing on development-mode servers.
IMPORTANT Do not run this process in production to load and re-index the Guidewire Solr Extension. Instead, run
the free-text batch load command (batchload) on the host on which the Guidewire Solr Extension resides.
See also
• “Free-text Batch Load Command” on page 315
• Configuration Guide
Statistics batch processing calculates the work activity statistics that ClaimCenter shows in the Dashboard Statistics
screen. The process also calculates aging values on claims and exposures.
Only users with the Supervisor role in ClaimCenter are able to view work activity statistics for the team in the
Statistics screen. This screen shows counts of open claims, activities, exposures, and matters for each user.
By default, this batch process runs every hour at three minutes past the hour. You can schedule this process to run
more frequently. However, monitor system performance for possible negative impact if you change the schedule.
T-accounts Escalation batch processing updates the t-accounts of the payments on a future-dated check that has now
reached its scheduled send date. After this update, the payment amount no longer contributes to future payments.
Instead, the payment contributes to calculations that include payments scheduled for today, such as total payments,
open reserves, remaining reserves, and available reserves.
T-accounts Escalation batch processing updates financials calculations in the event that the schedule for the
Financials Escalation or Bulk Invoice Escalation processes runs after the end of the business day. That way, the
financial calculations are current for the day scheduled to send the check, but the check is still editable as it has not
yet been escalated. T-accounts Escalation ensures that T-account balances and dependent calculated financials values
are correct between midnight and the first scheduled run of Financials Escalation or Bulk Invoice Escalation for that
day.
Schedule T-accounts Escalation batch processing to run as close to just past midnight as possible and before
Financials Escalation and Bulk Invoice Escalation batch processing runs.
See also
• “Financials Escalation Batch Processing” on page 118
• “Bulk Invoice Escalation Batch Processing” on page 106
User Exception batch processing runs the user exception rule set on all users in the system.
By default, Guidewire disables User Exception batch processing in the base configuration by setting the number of
workers in work-queue.xml to 0. You must set the number of workers in this file to 1 or more to be able to run this
process.
See also
• “Configuring Work Queues” on page 96
• Rules Guide
User Workload Update batch processing updates workload data for ClaimCenter users. Run User Workload Update
any time a change is made that affects existing weighted workload values. In particular, run User Workload Update
batch processing to recalculate workload classifications and assignments to groups after any change to
classifications. Otherwise, it is likely that the stored computed workload weights are out of date.
See also
• Application Guide
Workflow batch processing wakes up at 10-minute intervals and runs workflow worker tasks. Workflow cannot
advance any faster in the background than this schedule.
See also
• Configuration Guide
Work Item Set Purge batch processing purges work item sets from the database. This process uses Gosu class
WorkItemSetPurge to read the value of the BatchProcessHistoryPurgeDaysOld parameter in config.xml. The
process then uses this value to determine the number of days to retain work item sets before purging them.
Note: The BatchProcessHistoryPurgeDaysOld parameter also configures how many days to retain process
history records, which the separate “Process History Purge Batch Processing” on page 122 process removes.
Work Queue Instrumentation Purge batch processing purges instrumentation data for work queues. This process uses
Gosu class WorkQueueInstrumentationPurge to read the value of the InstrumentedWorkerInfoPurgeDaysOld
parameter in config.xml. The process then uses this value to determine how long to retain work queue
instrumentation data.
See also
• “Work Queue Info” on page 352
Unused processes
The Batch Process Type typelist (BatchProcessType.tti) includes a few Guidewire platform processes that
ClaimCenter does not currently use. Guidewire indicates this status by setting the retired flag on the process
typecode to true and placing a line through the typecode in the typelist. You can ignore these processes.
Internal processes
ClaimCenter uses some Guidewire batch processes internally. ClaimCenter runs these processes to generate
database performance reports only. You cannot run these processes separately. They are:
• Microsoft DMV Report
• Oracle AWR Report
To improve performance and reliability, you can install multiple ClaimCenter servers in a configuration known as a
cluster. A ClaimCenter cluster distributes client connections among multiple ClaimCenter servers, reducing the load
on any one server. If one server fails, the other servers seamlessly handle its traffic. This topic describes how a
ClaimCenter cluster functions.
See also
• Installation Guide
Cluster Terminology
Guidewire uses the following terminology in discussions involving Guidewire ClaimCenter clusters.
Term Meaning
Host The physical machine on which one or more Guidewire applications run.
Application instance An individual ClaimCenter deployment. It is possible to run multiple application instances on the same
host.
ClaimCenter server The server associated with each application instance. For production environments, Guidewire supports
JBoss, Tomcat, WebLogic, and WebSphere servers. If running multiple servers on the same host, each
server must map to a different physical port.
Server role A categorization of each application instance in the cluster by its function, as defined by its role.
Examples of server roles are ui (manages user interface requests), batch (manages batch processing),
and messaging (manages messaging and message destinations).
You define server roles using the cluster <registry> element in config.xml. See “Server Roles” on page
141 for more information. See also “Understanding the Configuration Registry Element” on page 46.
Cluster A grouping of two or more Guidewire application instances that have a common configuration and
function as an integrated unit. Typically, each server in the cluster has one or more roles or functions.
Most often, if there are multiple servers assigned the ui role, the cluster contains a third-party load
balancer as well.
The individual application instances in the cluster must all connect to a common database.
Cluster member A single application instance within a Guidewire cluster.
Cluster Membership
As a ClaimCenter server joins the cluster, it updates a membership table in the ClaimCenter database. All cluster
servers periodically poll this table to determine cluster membership.
Cluster Availability
To ensure a high degree of availability, Guidewire recommends that the cluster configuration include two or more
servers with each specific server role. You also need to provide ample capacity for running role-constrained items
such as message destinations or batch processes.
Cluster Monitoring
Guidewire provides cluster monitoring screens that are available to those with privileges to view the Server Tools
screens:
• The Server Tools Cluster Members screens provide information on each server in the cluster.
• The Server Tools Cluster Components screen provides information on the components running on a given server.
Also, there are system_tools command options that provide information on cluster members and components in
the ClaimCenter cluster.
A data version mismatch results in ClaimCenter issuing a Concurrent Data Change Exception (CDCE). The user or
batch job can then re-issue a change based on the latest values entered.
See also
• “ClaimCenter Server Configuration” on page 45
• “Cache Management” on page 68
• “Concurrent Data Change Prevention” on page 69
• “Planning a ClaimCenter Cluster” on page 149
• “Server Roles” on page 141
• “Batch Process Prioritization” on page 163
• “Messaging and Startable Service Load Balancing” on page 164
• “Cluster Members and Components” on page 386
• “System Tools Options” on page 423
Cluster Communication
In the base ClaimCenter configuration, ClaimCenter clusters use the following types of transport mechanisms for
sending messages between cluster members:
• Reliable broadcast without replies
• Unreliable fast broadcast without replies
• Reliable unicast with reply
Guidewire provides a default plugin implementation to support each of these transport types. However, it is possible
to implement your own unicast/multicast transport by implementing the corresponding plugin. Guidewire disables
fast broadcast messaging in the base configuration.
Unicast Communications
ClaimCenter clusters use a point-to-point protocol over TCP for direct server-to-server communication. For example, it is
possible for a ClaimCenter Server Tools screen function to create a message request that directly targets a specific
server. In this case, server A, on which the message request originates, sends a unicast message to server B, which
receives and processes the request. ClaimCenter server lease management also leverages unicast communication to
speed up certain actions, such as lease transfers.
Multicast Communications
ClaimCenter clusters leverage the database for distributing broadcast messages.
ClusterBroadcastTransportFactory Provides a single factory method for creating a cluster transport for reliable
broadcast of messages, with no replies. ClaimCenter stores broadcast
messages in the database and then periodically loads any new broadcast
messages onto each node in the cluster. This type of cluster transport
guarantees the delivery order and the reliable delivery of the broadcast
message.
ClaimCenter uses this mechanism for default message broadcast if you do
not enable the ClusterFastBroadcastTransportFactory plugin
implementation.
ClusterFastBroadcastTransportFactory Provides a single factory method for creating a cluster transport for fast
broadcast of messages, with no replies. This type of transport:
• Uses UDP multicast protocol
• Does not guarantee the delivery order or even the actual delivery of
the broadcast message
ClaimCenter typically uses this type of cluster transport for broadcasting
cache eviction notifications to cluster members.
The use of this transport type is optional. In the base configuration,
Guidewire disables the ClusterFastBroadcastTransportFactory plugin
implementation due to its use of the UDP protocol. If you do not enable
the plugin implementation, ClaimCenter uses the ClusterBroadcastTransportFactory cluster transport for broadcast
messages instead.
ClusterUnicastTransportFactory Provides a single factory method for creating a cluster transport for point-
to-point unicast messages between specific servers in the cluster. The
default plugin implementation uses TCP for the transport protocol.
Guidewire provides internal Java classes for these cluster-related plugin implementations. It is not possible to
modify these Java classes. To see the plugin definitions, open ClaimCenter Studio and navigate to the following
location in the Studio Project window:
configuration→config→Plugins→registry
Plugin Parameters
The cluster plugin implementations that Guidewire provides in the base configuration all support plugin parameters
that you can use to reconfigure the plugin. All plugin parameters are optional. Guidewire provides default values for
each of the plugin parameters. See “Cluster Plugin Parameter Reference” on page 137 for more information.
To define a plugin parameter, you manually add that parameter to the plugin definition in the ClaimCenter plugin
editor. For example, suppose that you want to directly control the number of threads in the thread pool that handle
inbound requests in the ClusterUnicastTransportFactory plugin. In this case, you manually add the poolSize
parameter and value to the plugin definition for ClusterUnicastTransportFactory, using the Studio plugin editor.
Parameter Description
batchesDeleteInterval Average time (in milliseconds) between the execution of a SQL statement that deletes old
message batches from the database. Each server node in the cluster executes this SQL
statement. Therefore, if the cluster installation contains many nodes, Guidewire recommends
that you increase this value.
The default is 60000 milliseconds (1 minute).
batchKeepPeriod Maximum amount of time for ClaimCenter to retain a batch in the database before deleting it.
The default is 600000 (10 minutes).
batchReadInterval Maximum time interval (in milliseconds) between reading and receiving new batches. The
default is 3000 milliseconds (3 seconds).
batchWriteAttempts Maximum number of attempts to write to a batch queue. If the number of consecutive errors
exceeds this threshold, the transport switches to ERROR mode in which each new message pops the oldest message
pops the oldest message out of the in-memory queue. The purpose of this parameter value is
to avoid out-of-memory issues. The default is 30.
batchWriteInterval Maximum time interval to wait (in milliseconds) before ClaimCenter writes, or sends, the
current batch of messages. The default is 2000 milliseconds (2 seconds).
maxOutboundBufferSize Maximum size of outbound buffer (in megabytes). The purpose of this parameter value is to
prevent out-of-memory issues if a transport is having problems writing or sending messages.
The default is 25 megabytes.
preferredBatchDataSize Maximum size of the batch (in bytes). If the size of the current batch (the sum of all of the
message batch sizes) reaches this threshold, ClaimCenter writes, or sends, the current batch of
messages immediately.
This value must be less than or equal to the largest possible integer value supported by your
hardware.
preferredBatchMessageCnt Maximum number of pending messages allowed in the batch queue. If the number of
messages in the batch queue reaches this threshold, ClaimCenter writes, or sends, the current
batch of messages immediately.
The value must be less than or equal to the largest possible integer value supported by your
hardware.
receiverPoolSize Number of threads in the thread pool that handle inbound messages. The default is 4.
To set a plugin parameter, you must manually add that parameter to the plugin definition in the ClaimCenter plugin
editor. To access the plugin editor, navigate to the following location in ClaimCenter Studio and double-click the
plugin name:
configuration→config→Plugins→registry
See also
• Configuration Guide
Parameter Description
bindAddr Inet address to which ClaimCenter is to bind. This parameter can be useful if there are
multiple-NIC hosts. The default fallback for this parameter is the first non-loopback interface
found on the host as defined by the NetworkInterface Java API.
bindPort Port number to which ClaimCenter is to bind. This parameter can be useful for server hosts
behind a firewall.
maxMessageSize Maximum allowable size of message. ClaimCenter calculates the default value of this
parameter using the following algorithm:
(Maximum IP datagram size) - (UDP header size) - (IP header size)
The maximum IP datagram size is 65,535. The UDP header size is 8. The IP header size is one of
the following values:
• IPv4 = 20
• IPv6 = 40
Thus, if using IPv6, the default value for this parameter is 65,535 - 8 - 40, which is 65,487.
messageKeepPeriod Time (in milliseconds) to keep messages in memory in order to skip retransmitted messages
and to combine divided messages. The default is one of the following:
• 2 * (maximum retransmit interval)
• 10 seconds, if not using retransmit
messageSalt Integer value that ClaimCenter uses in calculating the sending message checksum. This value
must be the same on all servers in the ClaimCenter cluster. The default is 12345.
multicastAddress Multicast Inet address. The default is 228.8.8.8.
multicastPort Multicast port. The default is 38180.
nodeStatisticsKeepPeriod Time (in milliseconds) to keep node statistics in memory after last activity. The default is
3,600,000 (1 hour).
oldMessagesDeleteInterval Average time (in milliseconds) between removals of old messages from memory. The
default is 1,000.
receiverPoolSize Number of threads in the thread pool that handle inbound messages. The default is 4.
receiverQueueCapacity Thread pool queue capacity. The default is 100.
retransmitIntervals Comma-separated list of retransmit intervals (in milliseconds). The default is 10000.
sendHeartbeatInterval Time (in milliseconds) between sending heartbeat messages. The default is 30,000 (30
seconds).
ttl Time-to-live (TTL) for multicast datagram packets. The default is 8.
To set a plugin parameter, you must manually add that parameter to the plugin definition in the ClaimCenter plugin
editor. To access the plugin editor, navigate to the following location in ClaimCenter Studio and double-click the
plugin name:
configuration→config→Plugins→registry
See also
• Configuration Guide
Parameter Description
bindAddr Inet address to which ClaimCenter is to bind. This parameter can be useful if there are multiple-NIC
hosts. The default fallback for this parameter is the first non-loopback interface found on the host as
defined by the NetworkInterface Java API.
bindPort Port number to which ClaimCenter is to bind. This parameter can be useful for server hosts behind a
firewall. The default fallback for this parameter is an ephemeral port, a free port above 1024 that is
within a range supplied by the host operating system.
poolQueueCapacity Thread pool queue capacity. The default is 50.
poolSize Number of threads in the thread pool that handle inbound requests. The default is 4.
To set a plugin parameter, you must manually add that parameter to the plugin definition in the ClaimCenter plugin
editor. To access the plugin editor, navigate to the following location in ClaimCenter Studio and double-click the
plugin name:
configuration→config→Plugins→registry
See also
• Configuration Guide
The exact syntax to use in setting system parameters at application server start is dependent on the application server
type. See “Setting JVM Options in ClaimCenter” on page 52 for more information.
ClusterUnicastTransportFactory Server.Cluster.PointToPoint
In the second logging example, the cluster installation again enables both the ClusterFastBroadcastTransportFactory
and ClusterUnicastTransportFactory plugins. In this case, however, the installation provides a value for the
ClusterUnicastTransportFactory.bindPort parameter, which is 53870, defined in the plugin editor for this
plugin.
Server Roles
In general, a Guidewire application cluster contains servers (cluster members) that manage the following types of
functionality.
Function Description
Online processing Server interactively manages requests from users logged into Guidewire ClaimCenter.
Background processing Server manages batch process execution, work queue processing, message destination processing,
lease management, and other similar items.
Guidewire categorizes each individual server instance in the cluster by its function, as defined by its role. In the base
configuration, ClaimCenter defines server roles to handle the following functionality. In a typical installation, only
those servers that support external requests such as user input use the ui server role.
It is possible for multiple servers in a ClaimCenter cluster to have the same server role. Servers that have the
same role type typically have similar resource allocations and configuration. Conversely, servers with different
server role types typically have different workloads and allocate their resources differently.
See also
• “Work Queues and Batch Processes, a Reference” on page 103
• “Understanding the Configuration Registry Element” on page 46
• “Batch Process Prioritization” on page 163
You associate a specific server role or host name with a message destination in the Studio message-config.xml
editor. If you do not set a server role for a message destination, the messaging editor shows a default role of
messaging at the top of the screen.
You set a specific role on a ClaimCenter server through one of the following ways:
• Through an option on the gwb runServer command used to start the server from a command prompt.
• Through the registry metadata definitions in file config.xml.
ClaimCenter periodically reviews all destination leases to determine if the leases are still valid and to look for new
leases to acquire. If a lease expires or the lease manager creates a new lease, ClaimCenter again searches for new
messaging destinations to assign.
Guidewire provides a configurable load balancing strategy for those servers with the messaging role.
See also
• “Understanding the Configuration Registry Element” on page 46
• “Messaging and Startable Service Load Balancing” on page 164
• “Messaging Tools Command” on page 420
• Integration Guide
• Configuration Guide
IMPORTANT Each server with the scheduler role must also have configuration parameter SchedulerEnabled set
to true in config.xml.
See also
• “Work Queues” on page 86
• “Work Queues and Server Roles” on page 99
• “Understanding the Configuration Registry Element” on page 46
ui Server Role
Guidewire ClaimCenter uses the ui server role as a placeholder role only. Guidewire ClaimCenter servers typically
operate in conjunction with a non-Guidewire load balancer that manages the user interface transactions.
ClaimCenter distributes web requests to the various cluster members according to the rules specified for the load
balancer. Any cluster server that receives a web request processes that request, regardless of role assignment.
See also
• “Understanding the Configuration Registry Element” on page 46
• “Guidewire ClaimCenter Cluster Installations” on page 134
• “Planning a ClaimCenter Cluster” on page 149
Most background tasks, except batch processes, stop quickly as their units of work are small. The actual task
managers, for example the Batch Process Manager or the Message Destination Manager, do not instantly stop in a
server shutdown. Instead, each lease manager moves to a passive mode in which it does not start new background
tasks and moves to stop or complete any currently running tasks.
After all components stop their background tasks, you can shut down the batch server safely.
ISystemTools.getClusterState()
You can also use the following system_tools command options to gather information about a server and the state
of the server components:
system_tools -components
system_tools -nodes
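For example, the following sketch runs one of these options from the admin/bin directory of the ClaimCenter
installation; the user name and password are placeholders for an account with administrative privileges:
system_tools -user <admin user> -password <password> -nodes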
See “System Tools Options” on page 423 for a discussion of these command options.
This topic discusses ways to implement, manage, and monitor a Guidewire ClaimCenter cluster.
Procedure
1. In your source configuration, for use on all ClaimCenter servers in the cluster, open config.xml for editing.
2. In config.xml, set the following clustering-related configuration parameters appropriately:
ClusteringEnabled
ClusterMemberPurgeDaysOld
ConfigVerificationEnabled
PDFMergeHandlerLicenseKey
3. In config.xml, using the <registry> element, define the following:
a. The set of valid server roles for use in this cluster.
b. The server roles for each individual ClaimCenter server on the cluster.
Note: Guidewire does not require that you use the server <registry> element in config.xml to define the
individual server instances in the cluster. You can also set these values through JVM parameters at server
start up.
4. In config.xml, set the value for KeyGeneratorRangeSize.
5. Create a deployment WAR or EAR file for Guidewire ClaimCenter.
6. Install a ClaimCenter cluster server in the same way that you install a standalone ClaimCenter server.
IMPORTANT If you install multiple ClaimCenter servers on the same host machine, each ClaimCenter server
must run in its own JVM instance.
Result
See also
• “Understanding the Configuration Registry Element” on page 46
• “Cluster Members and Components” on page 386
• Installation Guide
• Configuration Guide
To disable clustering and remove a server from a cluster, set this parameter to false on that server. After the server
is no longer in a cluster, it behaves as any other standalone ClaimCenter server.
See also
• Configuration Guide
Result
After you start the new server, it connects to the cluster and compares its configuration with the cluster configuration
stored in the ClaimCenter database. It performs a checksum of the config.xml file and checks the config
subdirectories. If the configurations differ, the server fails startup and ClaimCenter writes failure messages to the log
file.
IMPORTANT Start the configuration upgrade on a single cluster server and let it fully initialize before starting the
upgrade process on the other cluster members.
Before starting a rolling upgrade, click Start Rolling Upgrade in the Server Tools Upgrade and Versions screen. You can
do this on any server in the ClaimCenter cluster. This action indicates that a rolling upgrade of the individual cluster
members is in progress.
After completing the upgrade of all cluster servers, click Rolling Upgrade Complete in the Server Tools Upgrade and
Versions screen. This action indicates that all servers in the cluster now use the upgrade WAR/EAR file and that the
rolling upgrade process is complete.
See also
• “Performing a Rolling Upgrade” on page 174
• “Unexpected Upgrades” on page 177
Use the following guidelines as you bring up the individual cluster members:
• After the first cluster server completes the upgrade cycle, it is possible to bring up all other servers in the
ClaimCenter cluster in parallel. However, if starting a large number of servers causes resource contention, insert
a short interval of time between each server start, for example, 10 seconds.
• As a general rule, start servers that manage back-end processes first. For example, start servers with the batch
and messaging roles before starting servers with the ui role.
See also
• Upgrade Guide
http://localhost:8080/cc/ping
Information about the local server member – Its server ID, the server roles assigned to this ClaimCenter server, and
similar information
Information about individual members recognized by the cluster – Their server IDs, number of active user sessions
on each server instance, the server roles assigned to each application instance, date and time of each server start, and
similar information
Information on the components running on each cluster member – The component state, date and time of each
component start, and similar information
Information on any component lease failover in progress – The date and time of the deadline in which to complete
the failover process
History of each member in the cluster – The start and stop times for each cluster member, server roles, run level, and
similar information
From this screen, on any cluster member, you can start or cancel a planned shutdown for any recognized server
instance in the cluster.
See also
• See “Cluster Members and Components” on page 386 for more information on the Server Tools Cluster screens.
• See “Schedule a Planned Cluster Member Shutdown” on page 392 for information on starting or cancelling a
planned cluster member shutdown.
-components Provides information about the components that exist on each ClaimCenter server in the cluster.
Procedure
1. Ensure that the ClaimCenter server is running.
2. Open a command prompt and navigate to the ClaimCenter installation directory:
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Next steps
See also
• “System Tools Command” on page 422
http://server:port/cc/ping?v=2
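For example, you can call the utility from a command prompt with a tool such as curl, replacing server and port
with the values for the cluster member that you want to check:
curl "http://server:port/cc/ping?v=2"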
The ping utility returns the following types of information, depending on various factors, including whether the
server is a production server and whether the server start was successful.
{
"runLevelCode": 50,
"runLevelName": "MULTIUSER",
"runLevelOrdinal": 5,
"serverId": "ClaimCenterServer1",
"uptimeSeconds": 45
}
{
"runLevelCode": 40,
"runLevelName": "NODAEMONS",
"runLevelOrdinal": 3,
"serverId": "ClaimCenterServer1",
"startupException": "java.lang.RuntimeException: Test Startup Exception\n\tat
com.guidewire.pl.system.server.PingServerServletTest.
testInitTabStateJSONObjectShowsStartupException(PingServerServletTest.java:52)\n
\tat... ",
"uptimeSeconds": 40
}
The following code is an example of the ping utility return values if the ClaimCenter production server fails to start.
By default, ClaimCenter does not show the actual exception text and instead replaces the text with <not null>.
{
"runLevelCode": 40,
"runLevelName": "NODAEMONS",
"runLevelOrdinal": 3,
"serverId": "ClaimCenterServerPROD1",
"startupException": "<not null>",
"uptimeSeconds": 40
}
The following code is an example of the ping utility return values while the server is attempting to transition from one run level to another.
{
"attemptingTransition": {
"fromRunLevelName": "MULTIUSER",
"fromRunLevelOrdinal": 5,
"threadStackTrace": "Thread-142:TIMED_WAITING\n\tat sun.misc.Unsafe.park(Native Method)
\n\tat
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\n\tat...",
"toRunLevelName": "NODAEMONS",
"toRunLevelOrdinal": 3
},
"runLevelCode": 45,
"runLevelName": "DAEMONS",
"runLevelOrdinal": 4,
"serverId": "ClaimCenterServer1",
"uptimeSeconds": 6814
}
Note: Use system_tools command options to transition a ClaimCenter server from one run level to another.
The following code is an example of the ping utility return values for a running cluster member before a planned shutdown starts.
{
"runLevelCode": 50,
"runLevelName": "MULTIUSER",
"runLevelOrdinal": 5,
"serverId": "testsrv1",
"uptimeSeconds": 1820
}
The following code is an example of the ping utility return values after starting a planned server shutdown from the
(Server Tools) Info Pages→Cluster Members page.
{
"plannedShutdownStatus": "activated",
"runLevelCode": 50,
"runLevelName": "MULTIUSER",
"runLevelOrdinal": 5,
"serverId":
"testsrv1",
"uptimeSeconds": 1825
}
The following code is an example of the ping utility return values after the server shutdown completes.
{
"plannedShutdownStatus": "ready",
"runLevelCode": 50,
"runLevelName": "MULTIUSER",
"runLevelOrdinal": 5,
"serverId": "testsrv1",
"uptimeSeconds": 1830
}
See also
• “Using the ping Utility with a Production Server” on page 155
• “Set the Server Run Level Through System Tools” on page 61
• “System Tools Command” on page 422
This topic discusses server component lease managers, lease management, and component lease load balancing.
A component lease can be in one of the following failover states: Not Started, In Progress, Postponed, or Failed.
Initially, each component lease starts in the Not Started failover state. If a lease expires, the first lease manager that
discovers the expired lease does the following:
• It sets the lease to the In Progress failover state. After the lease is set to this state, the component associated with the lease cannot run anywhere until the issue that caused the lease to expire is resolved.
• It sets the Retry Failover field in the Server Tools Cluster Components screen to the following value.
CurrentTime + BackgroundTaskFailoverPlugin.FailoverTimeout
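For example, if a lease expires and the failure is discovered at 10:00:00, and the plugin's FailoverTimeout is 300 seconds (an illustrative value), the Retry Failover field shows 10:05:00.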
If more than one lease manager discovers the expired lease at the same time, only the first lease manager continues
the failover handling. The other lease managers detect that their SQL updates do not change anything and do not
continue the failover process for that lease.
The lease manager that started the failover calls the handleComponentNameFailover method on the
BackgroundTaskFailoverPlugin plugin to determine what to do next with the lease. The method returns one of the
following actions to handle the component lease failover.
• Complete the failover – The BackgroundTaskFailoverPlugin plugin logic confirms the lease failure and instructs the lease manager to complete the failover. In this case, the lease manager completes the failover process, either by deleting or expiring the lease.
• Postpone the failover – It is possible that the BackgroundTaskFailoverPlugin plugin logic cannot reliably confirm the lease failure. In this case, it can postpone the failover process by returning an associated action to take and the time duration to wait before taking that action. The lease manager updates the Retry Failover field in the Server Tools Cluster Components screen with the following value:
Current Time + FailoverHandlingResult.Duration
After the updated retry failover time expires, the lease manager considers the lease expired and starts the process of lease failover again.
• Dismiss the failover – It is possible that the BackgroundTaskFailoverPlugin plugin logic decides the specified background task did not fail, or that this particular task requires some manual action. In this case, the BackgroundTaskFailoverPlugin plugin logic dismisses or fails the automatic failover of the lease. The lease, with its FailoverState set to Failed, remains in the database until there is some kind of manual intervention. The failover process does not attempt to retry the automatic failover.
• Use external tool – The BackgroundTaskFailoverPlugin plugin logic returns a failover handled action. This action instructs the lease manager to do nothing with the lease. An external tool either deletes or renews the lease.
Calling an external tool to complete the failover can happen in any of the following ways:
• Programmatically calling the SystemToolsAPI.nodeFailed method.
• Programmatically calling the SystemToolsAPI.completeFailedFailover method.
• Clicking the Complete Failover button on the Server Tools Cluster Components screen.
If the cluster member that started the failover does not complete the failover in the specified retry failover time,
another cluster member detects this condition. The second cluster member then restarts the failover.
If at any point the original lease manager for the lease takes action to renew the lease, it does the following:
• It sets the FailoverState for the lease to Not Started.
• It resets the Retry Failover value to null.
At this point, the renewal of the lease resets the automatic failover process and negates any previous failover action
undertaken for the renewed lease.
<batch-process-config xmlns="http://guidewire.com/batch-process-config">
<settings defaultServer="#batch"
startupDelay="0, 5, 15, 90, 180"
startupTimeout="600"
pollInterval="60"/>
<settings defaultServer="#batch"
startupDelay="0, 5, 15, 90, 180"
startupTimeout="60"
pollInterval="10"
env="test"/>
</batch-process-config>
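In this example, the first <settings> element supplies the defaults, while the second, with env="test", overrides the startup timeout and poll interval when the server runs in the test environment (an interpretation based on the env attribute; confirm the environment name against your server configuration).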
pollInterval (required) – Time, in seconds, between polling the database for new available leases. ClaimCenter also broadcasts information on new batch process leases. See “Simple Lease Management Lifecycle for a Batch Process” on page 160 for more information.
startupDelay (required) – Number of seconds the batch process manager waits before starting the next process. The delay depends on the number of batch processes already running on the current server. In the base configuration, Guidewire sets this value to the following:
"0, 5, 15, 90, 180"
• Work stealing – Periodically transfers a component lease from an over-utilized server to a server that is under-utilized.
• Work acquisition – Provides under-utilized servers with a chance to acquire a lease on an available component.
To enable or disable the use of each strategy, Guidewire provides the following plugin parameters.
• messageDestinationLoadBalancingMode – Manages the load balancing strategies for message destinations.
Each of these plugin parameters can take one of the following values.
• disabled – Disables both the work acquisition and work stealing strategies.
• dynamic – Enables both the work acquisition and work stealing strategies.
• notransfer – Enables the work acquisition strategy only.
Guidewire provides a way to deploy configuration changes to each individual server in a ClaimCenter cluster.
Guidewire calls this type of configuration deployment a rolling upgrade, in the sense that upgrade changes move
through the cluster, one server instance at a time. This type of configuration deployment is in contrast to a full
database and application upgrade. A full upgrade requires that you bring down all ClaimCenter servers in the cluster
to complete the upgrade. Typically, a full upgrade includes changes to the ClaimCenter database.
IMPORTANT You cannot use a rolling upgrade to upgrade from a major or minor version of Guidewire
ClaimCenter to another major or minor version of ClaimCenter. In almost every case, a rolling upgrade is not
suitable for Guidewire application patches or maintenance releases. Only if an application patch meets the
compatibility criteria necessary for a rolling upgrade is a rolling upgrade of that patch possible. A rolling upgrade
is not a replacement or substitute for a full application and database upgrade.
Configuration Compatibility
Guidewire permits changes to the following files, file types, and installation folders in ClaimCenter Studio during
configuration deployment to the individual members of a cluster.
plugin It is safe to modify a .gwp file in the root plugin directory, including pointing to a new
implementation class.
Guidewire does not permit the following with respect to plugins in a rolling upgrade:
• Modifications to non-distributed, startable plugins
• Modifications to files in the plugin/Gosu or plugin/shared directories
Changes to plugin implementations can cause individual ClaimCenter instances to have different
versions of an object. Thus, it is possible for ClaimCenter instances running the source configuration
to consider objects created or updated on the target configuration to be invalid.
rules It is safe to perform the following operations on ClaimCenter Gosu rules:
• Add a new rule
• Modify an existing rule, including enabling or disabling the rule
• Rename a rule, which is actually deleting a rule and adding the rule under a different name
Changes to pre-update or validation rules can cause individual ClaimCenter instances to have
different versions of an object. Thus, it is possible for ClaimCenter instances running the source
configuration to consider objects created or updated on the target configuration to be invalid.
servlets It is generally safe to make changes to servlets. However, a change to an existing servlet has the
potential to break integration with a third-party product.
IMPORTANT Guidewire recommends that you undertake thorough testing after making changes to
servlet configuration to verify that all product integrations continue to work as intended.
templates It is safe to make modifications to note, email, and document templates as the impact of a change to
a template affects only that template.
typelists Depending on the type of typelist, it is generally safe to add typecodes to an existing typelist or to
add an entirely new typelist. It is also safe to edit the typelist description or change its category. It is
unsafe, however, to make changes to LOB typelists outside of a very narrow context. See “Making
Changes to LOB Typelists in a Rolling Upgrade” on page 171 for details.
Note: If you add a typelist, the typelist shows on servers running the target (new) configuration only.
On servers running the source (old) configuration, the typelist shows as blank. See “Making Changes
to Typelists in a Rolling Upgrade” on page 170.
web It is safe to add or delete a PCF or to modify an existing PCF.
webservices It is generally safe to make changes to web services. However, a change to an existing web service
has the potential to break integration with a third-party product.
IMPORTANT Guidewire recommends that you undertake thorough testing after making changes to
web services configuration to verify that all product integrations continue to work as intended.
See also
• “Verification of Configuration Compatibility” on page 173
ExposureType typelist typecodes to control the list view or detail view that shows on the screen. Guidewire also
uses the loss type typecodes to control the visibility of menus and submenus within the claim screens in
ClaimCenter.
Because of the interconnections between LOB typelists and ClaimCenter claim screens, Guidewire severely restricts
the kinds of changes that you can make to the LOB typelists in a rolling upgrade. In general, you can only make
changes safely to certain typefilters on the LossType typelist.
See also
• Configuration Guide
If you modify any of these configuration parameters, you must then restart the server and let the ClaimCenter server
upgrade the database.
However, it is possible to affect the functionality and behavior of the New Exposure menu and its submenus in the
same manner by modifying typefilters on the LossType typelist. Changes that you make to the LossType typelist do
not require a database upgrade. Instead, you can implement these changes through a rolling upgrade of the servers in
the ClaimCenter cluster.
In the base configuration, Guidewire provides a LossType typefilter, with the same name, for each associated
configuration parameter. The typelist editor shows these typefilters on the LossType typelist Typelist tab. To affect
the behavior of the New Exposure menu and submenus, add a typekey to a typefilter in a similar manner as adding a
value to the associated configuration parameter.
ClaimCenter merges any changes that you make to one of these typefilters with the typecode list in the configuration
parameter with the same name as the typefilter. In other words, the two lists are additive. For example, suppose that
configuration parameter X contains a typecode list of A, B, C, D. You then modify the LossType typefilter with the
same name and add typecode E. ClaimCenter then uses loss type typecodes A, B, C, D, and E to determine what to
show in the claim screens.
See also
• Configuration Guide
• Configurations are different – Requires a full server upgrade. Guidewire does not permit a configuration deployment (rolling upgrade) using the target configuration.
• Configurations are identical – No upgrade is necessary.
• Configurations are compatible – Guidewire permits a configuration deployment of these changes.
If a configuration deployment is not possible, the command lists the incompatible or missing files.
If a configuration deployment is in progress, there are two possible configurations active in the cluster. Each
individual server instance in the cluster is using either the source configuration or the target configuration.
The -verifyconfig command option checks for both configurations on the cluster member on which you run the
command and reports which of the configurations is active on this cluster member. If neither configuration is active,
the command reports that a configuration deployment is in progress and that it is not possible to verify the
configuration at this time.
See also
• “System Tools Command” on page 422
• Instance – An individual ClaimCenter server, running either in a VM (Virtual Machine) or JVM (Java Virtual Machine) or as a stand-alone ClaimCenter server.
• Test instance – A ClaimCenter instance with the same data model and EAR/WAR build file as that used on a production instance. The test cluster does not need to have the same number of test instances as the production cluster. However, there need to be at least two instances in the test cluster to be able to test the rolling upgrade process.
• Production instance – A member of the production cluster accessed and used by external ClaimCenter users.
Next steps
After you complete these steps, continue to “Perform a Rolling Upgrade in a Test Environment” on page 175.
Procedure
1. In file database-config.xml, verify that the <database> element autoupgrade attribute is set to manual (or
non-existent).
If the attribute is missing, the default value for this attribute is manual. The value cannot be full.
2. Create a new EAR/WAR ClaimCenter build that includes all the proposed configuration changes. The build
name must include some identification such as a date or a version number.
3. Place the EAR/WAR build in a local directory on a test instance.
4. Run the following command option from the command prompt.
system_tools -verifyconfig filepath
This verification process tests if the target configuration is compatible with the source configuration on that
server. If the configuration verification tool indicates that the proposed changes are compatible with the
existing configuration, continue to the next step.
5. Back up the test database.
6. On any server instance in the cluster, navigate to the Server Tools Upgrade and Versions screen and click Start
Rolling Upgrade.
7. Bring down one of the servers in the test environment and deploy the new EAR/WAR file to this server
instance.
8. Bring the test instance back up.
9. Perform user acceptance testing on the test cluster, on both the old and new configurations.
If testing indicates that there are no major issues in running the two configurations simultaneously, continue to
the next step.
10. Deploy the new EAR/WAR file to all test instances.
11. Navigate to the Server Tools Upgrade and Versions screen of any server instance in the test environment.
12. Click Rolling Upgrade Complete.
This action clears the upgrade flag indicating that a rolling upgrade is in progress. The rolling upgrade of the
new configuration changes is now complete.
13. Perform acceptance testing on all the test instances to verify that ClaimCenter works as intended.
Next steps
If testing indicates that there are no issues with the new configuration running on all test instances, continue to
“Perform a Rolling Upgrade in a Production Environment” on page 176.
Procedure
1. In file database-config.xml, verify that the <database> element autoupgrade attribute is set to manual (or
non-existent).
If the attribute is missing, the default value for this attribute is manual. The value cannot be full.
2. On any instance in the ClaimCenter production cluster, navigate to the Server Tools Upgrade and Versions
screen.
a. Click Start Rolling Upgrade.
b. Verify the checklist of upgrade prerequisites.
c. Click Start Rolling Upgrade.
This action sets a flag in the ClaimCenter database that indicates a rolling upgrade is in progress.
3. Navigate to the Server Tools Cluster Members screen on any server instance.
a. For the instance that you want to shut down, click Start Planned Shutdown in the Actions column.
b. Set the appropriate shutdown parameters in the Schedule Planned Shutdown screen.
c. Click Schedule Shutdown.
This action schedules a shutdown of the specified instance. All users logged into this ClaimCenter
instance see an on-screen message indicating that a planned shutdown is in progress. After the scheduled
period of time elapses, there are no more user connections to this production instance.
4. Deploy the new ClaimCenter build to the production instance that you shut down.
5. Bring the instance with the configuration build back up.
6. Perform acceptance testing on the production instances to determine if there are any major issues with running
the two configurations in the same cluster.
If testing indicates that there are no major issues with running the two configurations in the production cluster,
repeat the previous steps until you have upgraded all the production instances with the new configuration build,
and then continue to the next step.
7. Perform another round of acceptance testing.
If testing indicates that there are no major issues with the new configuration on the production instances,
continue to the next step.
8. Navigate to the Server Tools Upgrade and Versions screen of any instance in the ClaimCenter production cluster.
9. Click Rolling Upgrade Complete.
This action clears the upgrade flag indicating that a rolling upgrade is in progress. The rolling upgrade of the
new configuration changes is now complete.
10. Perform another round of acceptance testing to ensure that there are no issues with the new configuration.
Unexpected Upgrades
Any time that you deploy a new WAR/EAR file to a ClaimCenter server and restart the server, ClaimCenter assumes
that an upgrade is in progress. To prevent the unexpected upgrade of a server, Guidewire requires that you set an
upgrade flag in ClaimCenter before starting either a full or rolling upgrade.
Guidewire requires the use of this flag to mitigate the risk of accidentally triggering an unexpected upgrade. As a
consequence, however, it is possible to encounter situations in which the ClaimCenter server does not start. In that
case, you must undertake a recovery sequence to return the server to a state in which it can start.
Full Upgrade
For a full upgrade, Guidewire first requires that you click Start Full Upgrade in the Server Tools Upgrade and Versions
screen (on any cluster member). This action signals your intention to perform a full upgrade. ClaimCenter then sets
a database flag to indicate that a full upgrade is in progress. After you complete the upgrade, ClaimCenter deletes
the database flag. You must set the upgrade flag again before starting a new full upgrade.
It is possible to set the full upgrade in progress flag in the following ways as well.
• System tools – To set the upgrade flag through system tools, use the following command option:
system_tools -startfullupgrade
At least one cluster member must be running in order for you to use this option.
• Web services – To set the upgrade flag using web services, call the SystemToolsAPI web service method startFullUpgrade. At least one cluster member must be running in order for you to use this option.
• Java system property – To set the upgrade flag through a Java system property, use the following system parameter to set the expected date of the upgrade while starting one of the affected servers:
gwb runServer -Dgw.cc.full.upgrade.intended.date=date
The date parameter is the current date in yyyyMMdd format.
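For example, to signal a full upgrade intended for March 15, 2024 (a hypothetical date), you could start the server as follows:
gwb runServer -Dgw.cc.full.upgrade.intended.date=20240315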
If you encounter a situation in which all cluster members refuse to start because the upgrade flag was not set, you
cannot set the upgrade flag through the server. Instead, you must set the upgrade flag using the Java system
parameter.
Rolling Upgrade
For a rolling upgrade, Guidewire first requires that you click Start Rolling Upgrade in the Upgrade and Versions screen
(on any cluster member). This action signals your intention to perform a rolling upgrade and sets a rolling upgrade
in progress database flag. If you do not set the upgrade flag, ClaimCenter refuses to start a rolling upgrade.
After you complete the upgrade of all servers in the cluster, you must click Rolling Upgrade Complete on the Upgrade
and Versions screen, which removes the upgrade flag. After you do so, it is not possible to start a cluster member
running the source (old) configuration.
Guidewire permits a rolling upgrade of the individual members of a ClaimCenter cluster under certain conditions
only. In effect, the source (old) configuration and target (new) configuration must be compatible in very specific
ways.
Thus, during a rolling upgrade, if you mistakenly deploy an incompatible WAR/EAR file to a ClaimCenter server,
you can encounter a situation in which the server does not start. This is true whether or not you have set the rolling
upgrade in progress flag. In this case, remove the incompatible WAR/EAR file and deploy a compatible WAR/EAR file
before attempting to restart the server.
See also
• “Configuration Compatibility” on page 169
• “Verification of Configuration Compatibility” on page 173
Configuration – Changing the value of a general configuration parameter, one that is neither permanent nor semi-permanent. ClaimCenter reads these kinds of changes from configuration files. In this case, it does not matter whether you initiate a full upgrade from the Server Tools Upgrade and Versions screen. In any case, the application server starts without any errors or warnings and ClaimCenter updates the Upgrade and Versions screen.
Security Administration
chapter 11
IMPORTANT Computer security and encryption is a complex topic in which network architecture plays a major
role. Use this documentation as a starting point. Guidewire strongly recommends that you also perform
independent research and testing to develop a secure solution for your company network and installed applications.
Guidewire strongly recommends that you deploy ClaimCenter over TLS (Transport Layer Security) for at least the
login and change password pages. Ideally, deploy ClaimCenter entirely under TLS to protect all sensitive
transmitted data.
on the list as the preferred protocol. If that protocol is not available, ClaimCenter tries the subsequent protocols on
the list until the connection either succeeds or fails completely.
The following table lists the available property overrides.
gw.api.system.server.ServerUtil.getEnv() != "PROD"
The canVisit property must evaluate to true for the Activity Patterns Detail screen to be accessible to a ClaimCenter
administrative user. If the server is in development or test mode, the expression evaluates to true and ClaimCenter
allows access to the screen.
Guidewire designs its security infrastructure so that you can add custom permissions, automatically enforce
permissions, and easily map between users, permissions, and actions. This topic explains how to use the
ClaimCenter permission infrastructure to control access to key ClaimCenter objects.
“System Permission Keys” on page 186 System permission keys apply to specific user interface elements or data model
entities.
“Application Permission Keys” on page 186 Application permission keys represent a set of one or more system permissions.
You can view a list of both system and application permission keys in the Guidewire Security Dictionary.
See also
• Configuration Guide
Screen-level Permissions
Screen-level permissions apply to user interface elements, for example, the permission to view the administrative
Server Tools screens. ClaimCenter defines many user interface permissions internally.
In general, screen-level permissions start with the word “view” followed by a reference to the user interface object
they protect. You can add custom screen-level permissions to Guidewire ClaimCenter by extending the
SystemPermissionType typelist.
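For example, a custom screen-level permission added through a SystemPermissionType typelist extension might look like the following sketch. The typecode syntax shown is an assumption based on the code, name, desc, priority, and retired fields described in the procedures later in this chapter:
<typecode code="viewmycustomscrn" name="View My Custom Screen" desc="Permission to view a custom screen" priority="-1" retired="false"/>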
PCF files define the point at which ClaimCenter calls user interface permissions. It is possible to change this point
by customizing the PCF file that calls it.
Domain-level Permissions
Domain permissions apply to data model entities, such as permission to view Note objects. For example, as a user
attempts to access the summary for a sensitive note, ClaimCenter verifies that the user has the following
permissions:
• Permission to view the Claim screen
• Permission to access that particular note type
Most top-level objects in the ClaimCenter data model have associated domain-level permissions. ClaimCenter
defines all of an object’s domain-level permissions internally. It is not possible to add, remove, or edit domain
permissions. Similarly, ClaimCenter defines the points at which it checks these permissions in internal code and in
page configuration format (PCF) files. You cannot change the internal checks. You can, however, change the point at
which the PCF files call these checks.
See also
• “The Security Configuration File” on page 188
• Configuration Guide
ClaimCenter does not generate permissions automatically for the subtypes of an entity. You must explicitly add the
entity subtype to security-config.xml for ClaimCenter to generate permissions for that subtype.
UserRole.ttx Defines possible claim user roles. These roles appear on the Users tab of the claim, on the Parties
Involved screen.
You access this permission in code as perm.entity.perm. This syntax has the following meaning:
• entity – The business object or entity on which the permission acts.
• perm – The permission given for this entity.
The attributes on the various elements have the following meanings.
Notice that:
• The security permissions work on a User entity.
• The application permission key is ViewProfiler.
• The handler lists a set of specific system permission types to which the handler grants the user access, if any of
the conditions are met.
To have the ViewProfiler application permission, the user must have an assigned role that contains one or more of
the listed system permissions.
if (perm.User.ViewProfiler) ...
The sample code condition evaluates to true if the current user has an assigned role with either the internaltools
permission or the toolsprofilerview permission.
See also
• “The Security Configuration File” on page 188
• “Wrap Handler Elements” on page 190
• “Object and Optional Object Handler Elements” on page 192
You access this permission in code as perm.entity.perm. This syntax has the following meaning:
• entity – The business object or entity on which the permission acts.
• perm – The permission given for this entity.
The attributes on the various elements have the following meanings.
The following example illustrates a <StaticHandler> element with two cascading <WrapHandler> elements
following it.
<SystemPermType code="internaltools"/>
<SystemPermType code="toolsProfilerview"/>
</StaticHandler>
(perm.System.internaltools OR perm.System.toolsProfilerview)
AND (perm.System.internaltools OR perm.System.toolsProfileredit)
AND (perm.System.toolsProfilerwebserviceedit)
For this compound condition to evaluate to true, all of the following conditions must be true:
• The user must have a role that contains either the internaltools or toolsProfilerview system permission, as
specified in the ViewProfiler static handler.
• The user must have a role that contains either the internaltools or toolsProfileredit system permission, as
specified in the EditProfiler wrap handler.
• The user must have a role that contains the toolsProfilerwebserviceedit system permission, as specified in
the EditWebServiceProfiler wrap handler.
Only if the user meets all sets of security criteria does the security handler permit the user to have the specified
application permission (EditWebServiceProfiler).
See also
• “The Security Configuration File” on page 188
• “Static Handler Elements” on page 188
• “Object and Optional Object Handler Elements” on page 192
You access these types of permissions in code as perm.entity.perm(obj). This syntax has the following meaning:
• entity – The business object or entity on which the permission acts.
• perm – The permission given for this object.
• obj – An instance of the business object governed by the permission.
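For example, a sketch of this syntax in Gosu, assuming a domain permission named view on the Note entity and a variable myNote that holds a Note instance:
if (perm.Note.view(myNote)) ...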
The details of the element attributes and subelements for the <ObjectHandler> and <OptionalObjectHandler>
security handlers are similar to those for <StaticHandler> elements.
See also
• “The Security Configuration File” on page 188
• “Static Handler Elements” on page 188
• “Wrap Handler Elements” on page 190
This topic explains how to use the ClaimCenter permission infrastructure to control access to document and note
objects.
See also
• “Understanding the Object Access Infrastructure” on page 185
• “The Security Configuration File” on page 188
• “Static Handler Elements” on page 188
• “Wrap Handler Elements” on page 190
• “Object and Optional Object Handler Elements” on page 192
See also
• For information on the various security handler elements, see “The Security Configuration File” on page 188.
• For information on permissions, refer to the system permissions area of the ClaimCenter Security Dictionary.
• For information on typelists, refer to ClaimCenter Studio, or the typelists area of the ClaimCenter Data
Dictionary.
<NotePermissions>
<NoteAccessProfile securitylevel="level">
<NoteCreatePermission permission="perm"/>
<NoteDeletePermission permission="perm"/>
<NoteEditBodyPermission permission="perm"/>
<NoteEditPermission permission="perm"/>
<NoteViewPermission permission="perm"/>
</NoteAccessProfile>
</NotePermissions>
The following code sample illustrates the security access levels for the Public security access type.
<NotePermissions>
<NoteAccessProfile securitylevel="public">
<NoteViewPermission permission="noteview"/>
<NoteEditPermission permission="noteedit"/>
<NoteDeletePermission permission="notedelete"/>
</NoteAccessProfile>
</NotePermissions>
Note: ClaimCenter grants access permissions based on the roles assigned to a user only. It is not possible to restrict
Note access based on security zones or groups.
You set a document's security type by using the document's Security Type field in the user interface or through a Gosu
class that you write. In the base configuration, ClaimCenter provides the following document security types in the
DocumentSecurityType typelist:
• Sensitive
• Unrestricted
See also
• For information on the various security handler elements, see “The Security Configuration File” on page 188.
• For information on permissions, refer to the system permissions area of the ClaimCenter Security Dictionary.
• For information on typelists, refer to ClaimCenter Studio, or the typelists area of the ClaimCenter Data
Dictionary.
<DocumentPermissions>
<DocumentAccessProfile securitylevel="level">
<DocumentCreatePermission permission="perm"/>
<DocumentDeletePermission permission="perm"/>
<DocumentEditPermission permission="perm"/>
<DocumentViewPermission permission="perm"/>
</DocumentAccessProfile>
</DocumentPermissions>
The following code sample illustrates the security access levels for the Unrestricted and Sensitive security access
types.
<DocumentPermissions>
<DocumentAccessProfile securitylevel="unrestricted"/>
<DocumentAccessProfile securitylevel="sensitive">
<DocumentViewPermission permission="viewsensdoc"/>
<DocumentEditPermission permission="editsensdoc"/>
<DocumentDeletePermission permission="delsensdoc"/>
</DocumentAccessProfile>
</DocumentPermissions>
Note: ClaimCenter grants these permissions based on the user’s roles alone. You cannot restrict document access
based on security zones or groups.
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Metadata→Typelist.
2. Open DocumentSecurityType.tti.
3. Click the link for DocumentSecurityType.ttx.
4. Click the plus icon next to typecode.
5. Enter a code value for the new security type. The code must be 16 characters or less.
For example, enter subrogation for the code value.
6. Add values for the name and desc (description) of the new permission.
Leave the priority as -1 and retired as false.
7. Save your changes.
Next steps
After completing the preceding steps, perform procedure “Create Custom Permissions for a New Document Security
Type” on page 197.
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Metadata→Typelist.
2. Open the SystemPermissionType typelist.
3. Click the link for SystemPermissionType.ttx.
4. Click the plus icon next to typecode.
5. Enter a code value for the new permission.
Include the type of action you want the permission to control in the code. The code must be 16 characters or
less. For example, enter viewsubdoc (view subrogation document).
6. Add values for the name and desc (description) of the new permission.
Leave the priority as -1 and retired as false.
7. Save your changes.
Next steps
After completing the preceding steps, perform procedure “Create a Document Access Profile for a New Document
Type” on page 197.
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→security.
2. Open security-config.xml.
3. Add a <DocumentAccessProfile> element to the <DocumentPermissions> element for your document type.
This requires that you set the securitylevel attribute to a document security type defined in the
DocumentSecurityType typelist.
For example, on the <DocumentAccessProfile> element, define the securitylevel attribute as
subrogation, as the following example XML shows:
<DocumentPermissions>
<DocumentAccessProfile securitylevel="unrestricted"/>
...
<DocumentAccessProfile securitylevel="subrogation">
<DocumentViewPermission permission="viewsubdoc" />
<DocumentEditPermission permission="editsubdoc"/>
<DocumentDeletePermission permission="delsubdoc"/>
</DocumentAccessProfile>
</DocumentPermissions>
Next steps
After completing the preceding steps, perform procedure “Rebuild and Redeploy ClaimCenter” on page 198.
Procedure
1. Rebuild and redeploy your application package file if you are in a production environment.
2. Open a command prompt and navigate to the ClaimCenter installation directory.
3. Run the following command:
gwb genDataDictionary
Next steps
After completing the preceding steps, perform procedure “Add New Security Permissions to the Appropriate Roles”
on page 198.
See also
• Installation Guide
• Configuration Guide
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Add the new permissions to the appropriate roles.
For example, add the new subrogation system permissions to the Subrogation role, or whichever role is
appropriate for your configuration.
3. Repeat “Add New Security Permissions to the Appropriate Roles” on page 198 for the special investigation
permissions, adding them to the appropriate role.
4. Add the subrogation and special investigation permissions to the Manager role.
5. Verify that the members of each group have the associated roles.
Next steps
At this point, you can test the new configuration.
See also
• Application Guide
This topic explains how to use the permission infrastructure to control access to ClaimCenter claim objects.
See also
• “Understanding the Object Access Infrastructure” on page 185
• “The Security Configuration File” on page 188
• “Static Handler Elements” on page 188
• “Wrap Handler Elements” on page 190
• “Object and Optional Object Handler Elements” on page 192
See also
• “The Security Configuration File” on page 188
See also
• “The Security Configuration File” on page 188
• “Add a New Claim-related System Permission” on page 202
• “Map Claim Access Types to System Permissions” on page 203
• “Add a New Claim Access Type” on page 203
Procedure
1. Deploy your new security changes and restart the server.
2. Log into Guidewire ClaimCenter.
3. Locate an existing sensitive claim.
4. Change the security level of the claim from Sensitive to a new value.
5. Save the claim.
6. Set the claim back to Sensitive.
7. Save the claim.
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Metadata→Typelist:
a. Open ClaimSecurityType.tti.
b. Click the ClaimSecurityType.ttx link.
c. Click the plus icon next to typecode.
d. Enter a code value for the new permission.
Include the type of action you want the permission to control in the code, for example, viewvacation.
The code must be 16 characters or less.
e. Add values for the name and desc (description) attributes.
f. Leave the priority attribute as -1 and the retired attribute as false.
g. Save your changes.
2. Rebuild and redeploy your application package file if in a production environment.
3. Update and rebuild the typelist information:
a. In a command prompt window, navigate to the ClaimCenter installation directory
b. Run the following command:
gwb genDataDict
Next steps
See also
• Installation Guide
• Configuration Guide
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Metadata→Typelist:
a. Open SystemPermissionType.tti.
b. Click the SystemPermissionType.ttx link.
c. Click the plus icon next to typecode.
d. Enter a code value for the new permission.
Include the type of action you want the permission to control in the code, for example, viewvacation.
The code must be 16 characters or less.
e. Add values for the name and desc (description) attributes.
f. Leave the priority attribute as -1 and the retired attribute as false.
g. Save your changes.
2. Rebuild and redeploy your application package file if in a production environment.
3. Update and rebuild the typelist information:
a. In a command prompt window, navigate to the ClaimCenter installation directory.
b. Run the following command:
gwb genDataDict
Next steps
See also
• Installation Guide
• Configuration Guide
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→security:
a. Open security-config.xml.
b. Add an access mapping element that specifies a claim access type and a system permission.
If necessary, copy and modify an existing <AccessMapping> element.
c. Save your changes.
2. Rebuild and redeploy your application package file if you are in a production environment.
3. Update and rebuild the typelist information:
a. In a command prompt window, navigate to the ClaimCenter installation directory
b. Run the following command:
gwb genDataDict
Next steps
See also
• “The Security Configuration File” on page 188
• “Access Mapping Elements” on page 204
• Installation Guide
IMPORTANT For performance reasons, Guidewire does not recommend that you define more than a few custom
claim access types.
Procedure
1. In the Studio Project window, expand configuration→config→Metadata→Typelist:
a. Open ClaimAccessType.tti.
b. Click the ClaimAccessType.ttx link.
c. Click the plus icon next to typecode.
d. Enter a value in the code field for the new access type.
The code value must be 16 characters or less.
e. Add values for name and desc (description) attributes to the new security type.
f. Leave the Priority attribute as -1 and the Retired attribute as false.
g. Save your work.
2. Rebuild and redeploy your application package file if you are in a production environment.
3. Update and rebuild the typelist information:
a. In a command prompt window, navigate to the ClaimCenter installation directory
b. Run the following command:
gwb genDataDict
Next steps
See also
• Installation Guide
• Configuration Guide
The following example maps the permission to create new activities on a claim (actcreate) to the create access
type:
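A minimal sketch of such a mapping follows. The attribute names are illustrative assumptions; confirm the exact <AccessMapping> schema against the base security-config.xml:
<AccessMapping accesstype="create" permission="actcreate"/>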
Typically, security-config.xml maps multiple permissions to a single claim access type. The default edit access
illustrates this concept.
<AccessProfile securitylevel="level">
<ClaimOwnPermission permission="perm"/>
<SubObjectOwnPermission permission="perm"/>
<ClaimAccessLevels>
<AccessLevel level="level" permission="access_type" />
<DraftClaimAccessLevel level="level"/>
<ClaimUserAccessLevel role="user_role" level="level" permission="access_type" />
</ClaimAccessLevels>
<ActivityAccessLevels>
<AccessLevel level="level" permission="access_type" />
</ActivityAccessLevels>
<ExposureAccessLevels>
<AccessLevel level="level" permission="access_type" />
</ExposureAccessLevels>
</AccessProfile>
An <AccessProfile> element contains zero or one of each of the following subelements. See the respective subelements for
a discussion of their attributes and subelements.
See also
• “The Security Configuration File” on page 188
• “Restrict Claim Ownership” on page 210
• “Example: Default Access Profile for Unsecured Claims” on page 210
• “Example: Default Access Profile for Sensitive Claims” on page 211
<ActivityAccessLevels>
<AccessLevel level="level" permission="access_type" />
...
</ActivityAccessLevels>
See also
• “The Security Configuration File” on page 188
<ClaimAccessLevels>
<AccessLevel level="level" permission="access_type" />
...
<DraftClaimAccessLevel level="level"/>
<ClaimUserAccessLevel role="user_role" level="relationship" permission="access_type" />
...
</ClaimAccessLevels>
ClaimUserAccessLevel
• level (required) – Sets the security type that the claim must have for the user to access the claim. This must be either a value defined in the ClaimSecurityType typelist or the string “any”.
• permission (required) – The claim access types available to a user with the proper role and level. This must be a value defined in the ClaimAccessType typelist.
• role (required) – The role that a user must have to access this claim. This must be a value defined in the UserRole typelist. User roles show on the claim Users tab, on the Parties Involved screen.
DraftClaimAccessLevel
• level (required) – Defines the access level necessary to work on a claim while its status is draft. It has the same meaning as the level attribute for AccessLevel.
WARNING Be careful adding a <ClaimUserAccessLevel> element. Depending on how you define the
element, adding one user through the Users tab can grant access to the membership of entire groups or security
zones.
See also
• “The Security Configuration File” on page 188
<AccessProfile securitylevel="level">
<ClaimOwnPermission permission="perm"/>
...
</AccessProfile>
You are not likely to use this element on a profile for unsecured claims, but it can be useful to control ownership of
secured claims. For example, you can use this element to restrict the users who can own any claims that involve an
employee.
See also
• “The Security Configuration File” on page 188
<ExposureAccessLevels>
<AccessLevel level="level" permission="access_type" />
...
</ExposureAccessLevels>
The attributes on the <AccessLevel> element have the following meanings.
WARNING Be careful adding an <ExposureUserAccessLevel> element. Depending on how you define the
element, adding one user through the Users tab can grant access to the membership of entire groups or security
zones.
See also
• “The Security Configuration File” on page 188
<AccessProfile securitylevel="level">
<SubObjectOwnPermission permission="perm"/>
...
</AccessProfile>
You are not likely to use these elements on a profile for unsecured claims, but they can be useful to control
ownership of secured claims. For example, you can restrict ownership of claims involving an employee.
See also
• “The Security Configuration File” on page 188
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→security:
2. Open security-config.xml.
3. If creating an access profile for a new security type or for a security type that does not yet have an access
profile, do one of the following:
• Create a new <AccessProfile> element using an existing <AccessProfile> element as a template.
• Copy and modify an existing <AccessProfile> element.
4. Create or edit the <ClaimOwnPermission> and <SubObjectOwnPermission> elements.
The following example illustrates how to restrict access to a claim filed by a company employee.
<AccessProfile securitylevel="employeeclaim">
<ClaimOwnPermission permission="ownemployeeclaim"/>
<SubObjectOwnPermission permission="ownemployeeclaimsub"/> ...
</AccessProfile>
These elements specify which system permission a user must have to own the claim or subobjects of the
claim. If the required system permissions do not exist, you must add them to the SystemPermissionType
typelist.
5. Within ClaimCenter, add the system permissions to the appropriate roles.
For example, add the ownemployeeclaim and ownemployeeclaimsub permissions to the Sensitive Claim
Adjuster role.
Next steps
See also
• “The Security Configuration File” on page 188
• “Add a New Claim-related System Permission” on page 202
• “Access Mapping Elements” on page 204
• “Example: Default Access Profile for Unsecured Claims” on page 210
• “Example: Default Access Profile for Sensitive Claims” on page 211
<AccessProfile securitylevel="unsecuredclaim">
<ClaimAccessLevels>
<AccessLevel level="securityzone" permission="view"/>
<AccessLevel level="securityzone" permission="edit"/>
<DraftClaimAccessLevel level="securityzone"/>
<ClaimUserAccessLevel role="subrogationowner" level="user" permission="view"/>
<ClaimUserAccessLevel role="subrogationowner" level="user" permission="edit"/>
<ClaimUserAccessLevel role="reinsmgr" level="group" permission="view"/>
<AccessProfile securitylevel="sensitiveclaim">
<ClaimOwnPermission permission="ownsensclaim"/>
<SubObjectOwnPermission permission="ownsensclaimsub"/>
<ClaimAccessLevels>
<AccessLevel level="group" permission="view"/>
<AccessLevel level="group" permission="edit"/>
<DraftClaimAccessLevel level="group"/>
</ClaimAccessLevels>
<ActivityAccessLevels>
<AccessLevel level="user" permission="view"/>
<AccessLevel level="user" permission="edit"/>
</ActivityAccessLevels>
<ExposureAccessLevels>
<AccessLevel level="user" permission="view"/>
<AccessLevel level="user" permission="edit"/>
</ExposureAccessLevels>
</AccessProfile>
See also
• “The Security Configuration File” on page 188
• “Access Profile Elements” on page 205
• “Restrict Claim Ownership” on page 210
• “Example: Default Access Profile for Unsecured Claims” on page 210
• Static exposure security – Requires only that you construct an <ExposurePermissions> element in security-config.xml.
• Claim-based exposure security – Requires that you map the exposure security type to a system permission, which you then map to a claim access type. See “Add a Custom Permission” on page 214 for an example of how to construct this type of security access.
See also
• “The Security Configuration File” on page 188
• “Exposure Access Control” on page 212
• “Exposure Security Controls Overview” on page 212
• “Exposure Permissions Elements” on page 213
• “Implementing Claim-based Exposure Security” on page 214
<ExposurePermissions>
<ExposurePermission securitylevel="type" permission="perm"/>
</ExposurePermissions>
ClaimCenter considers exposures with an undefined securitylevel as having the null security level. To assign a
permission to these exposures, specify an <ExposurePermission> element without a securitylevel parameter.
Only define one <ExposurePermission> that omits the securitylevel. If you specify more than one,
ClaimCenter assigns the last one encountered to exposures at the null level.
The following example illustrates an <ExposurePermissions> element.
<ExposurePermissions>
<ExposurePermission securitylevel="secured" permission="expeditsec"/>
<ExposurePermission permission="unsecexpedit"/>
</ExposurePermissions>
Note: You can only map each security type a single time. Thus, you can map secured exposures to expeditsec, but
not to viewsecexp.
To access a secured exposure (or its related objects), users also need to have the expeditsec permission on at least
one role.
See also
• “The Security Configuration File” on page 188
• “Exposure Access Control” on page 212
• “Exposure Security Controls Overview” on page 212
• “Static Versus Claim-based Exposure Security” on page 212
• “Implementing Claim-based Exposure Security” on page 214
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→Metadata→Typelists.
a. Open SystemPermissionType.tti.
b. Click the SystemPermissionType.ttx link.
c. Click the plus icon next to typecode.
d. Enter abexposures in the code field.
The code value must be 16 characters or less.
e. Add values for name and desc (description) attributes to the new security type.
f. Leave the priority attribute as -1 and the retired attribute as false.
2. In a similar manner, create an abexposures type code in the ClaimAccessType typelist.
It is possible to create a claim access type and an exposure security type with the same typecode.
3. In Studio, expand configuration→config→security:
a. Open security-config.xml.
b. Create an <ExposurePermissions> element that looks similar to the following in file security-config.xml.
<ExposurePermissions>
<ExposurePermission securitylevel="abexposures" permission="abexposures"/>
</ExposurePermissions>
d. Create an <AccessProfile> element for security level sensitiveclaim that references this access type,
for example:
<AccessProfile securitylevel="sensitiveclaim">
...
<ExposureAccessLevels>
<AccessLevel level="user" permission="abexposure"/>
</ExposureAccessLevels>
</AccessProfile>
gwb genDataDict
Next steps
See also
• “The Security Configuration File” on page 188
• “Add a New Claim-related System Permission” on page 202
• “Access Mapping Elements” on page 204
• “Access Profile Elements” on page 205
• “Exposure Access Control” on page 212
• “Exposure Security Controls Overview” on page 212
• “Static Versus Claim-based Exposure Security” on page 212
• “Exposure Permissions Elements” on page 213
Database Administration
chapter 15
Database Configuration
This topic discusses database configuration file database-config.xml and how to use the file to configure
ClaimCenter database options.
See also
• “Database Best Practices” on page 259
• “Guidewire Database Direct Update Policy” on page 261
• “ClaimCenter Database Back Up” on page 262
• “Database Consistency Checks” on page 262
• “Resize Database Columns” on page 267
• “Purging Unwanted Data” on page 267
• “Understanding Database Statistics” on page 279
• “Understanding Claim Purging” on page 273
• “Configuration Parameters that Affect Claim Search on Oracle” on page 277
• “Oracle Materialized Views for Claim Searches” on page 277
See also
• “The Database Configuration File” on page 220
• “Database Parameters” on page 368
<!-- Sets options for the generation of database statistics at the global, database level -->
<databasestatistics databasedegree="integer" incrementalupdatethresholdpercent="integer"
numappserverthreads="integer" samplingpercentage="integer" useoraclestatspreferences="true|false">
<!-- Sets database statistics options for the named table -->
<tablestatistics action="delete|keep|update" databasedegree="integer" name="string"
samplingpercentage="integer">
<!-- Sets database statistics options for the named column on the named table -->
<histogramstatistics name="string" numbbuckets="integer"/>
</tablestatistics>
</databasestatistics>
<!-- Sets options for the connection pool that Guidewire provides -->
<dbcp-connection-pool jdbc-url="string" max-idle="integer" max-total="integer" max-wait="integer"
min-evictable-idle-time="integer" num-tests-per-eviction-run="integer" password-file="string"
test-on-borrow="true|false" test-on-return="true|false" test-while-idle="true|false"
time-between-eviction-runs="integer" when-exhausted-action="block|fail|grow">
<reset-tools-params collation="string" oracle-tnsnames="string" system-password="string"
system-username="string"/>
</dbcp-connection-pool>
<!-- Sets the data source for a JBoss, Tomcat, WebLogic, or WebSphere application server-->
<jndi-connection-pool datasource-name="string"/>
<!-- Sets performance options for the named table, overrides values set at the database level -->
<loader-table drop-deferrable-indexes="disable|enable|enable_all"
fk-enable-degree-of-parallelism="integer" name="string"
row-counts-degree-of-parallelism="integer">
<loader-index key-columns="string"/>
</loader-table>
</loader>
<mssql-db-ddl>
<!-- Sets SQL Server database options at the global, database level -->
<mssql-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-filegroups admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
<!-- Set SQL Server options for the named table, overrides values set at database level -->
<mssql-table-ddl table-name="string">
<mssql-index-ddl filter-where="string" index-compression="NONE|PAGE|ROW"
index-filegroup="string" key-columns="string" partition-scheme="string"/>
<mssql-table-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-table-filegroups index-filegroup="string" lob-filegroup="string"
table-filegroup="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
<ora-db-ddl>
<!-- Sets Oracle database options at the global, database level -->
<ora-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<tablespaces admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
<!-- Sets Oracle options for the named table, overrides values set at the database level -->
<ora-table-ddl table-name="string">
<ora-index-ddl index-compression="true|false" index-tablespace="string" key-columns="string"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<ora-table-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-table-date-interval-partitioning datecolumn="string"
interval="DAILY|MONTHLY|QUARTERLY|WEEKLY|YEARLY"/>
<ora-table-hash-partitioning hash-columns="string" num-partitions="integer"/>
<ora-table-tablespaces index-tablespace="string" lob-tablespace="string"
table-tablespace="string"/>
</ora-table-ddl>
</ora-db-ddl>
<versiontriggers dbmsperfinfothreshold="integer">
<!-- Sets override options for the named database version trigger -->
<versiontrigger extendedquerytracingenabled="true|false" name="string"
parallel-dml="true|false" parallel-query="true|false"
queryoptimizertracingenabled="true|false" recordcounters="true|false"
updatejoinorderedhint="true|false" updatejoinusemergehint="true|false"
updatejoinusenlhint="true|false"/>
</versiontriggers>
</upgrade>
</database>
File database-config.xml contains a single root-level <database> element that takes the following attributes.
name Required. String identifying the database for which ClaimCenter uses this connection specification.
dbtype Required. Database type, either h2 (for the QuickStart database), oracle, or sqlserver.
The following attributes are all optional.
addforeignkey Used only for development and testing. Do not use this attribute in production. The default is true.
autoupgrade Use to set how to upgrade the database. Valid values are:
• full – Takes precedence and initiates a full upgrade assuming all other necessary conditions are
met.
• manual – Requires that you set either the database upgrade type (in Server Tools Upgrade and
Versions screen) or the date system property.
checker Boolean. Whether ClaimCenter runs consistency checks before it starts:
• Development environments – For development environments with small data sets, you can enable consistency checks to run each time the ClaimCenter server starts. Set the value of checker in the database block to true to enable checks on server startup.
• Production environments – Running consistency checks upon server startup can take a long time, impact performance severely, and possibly time out on very large datasets. Set the value of checker in the database block to false to disable checks on server startup.
Valid values are:
• true – Guidewire recommends that you only set checker to true in development environments with a small set of test data.
• false – Guidewire recommends that you set checker to false under most circumstances.
The default is true.
See the following for more information:
• “Database Consistency Checks” on page 262
• “Configure Consistency Checks to Run at Server Startup” on page 264
env Use of the env attribute to set a server environment enables you to provide different database
configurations for different server environments. For example, you can set up different database
configurations for a production environment and a test environment.
See “Example Syntax for Registry Server Element” on page 48 for more information.
printcommands Boolean. Whether the server prints database upgrade messages to the console upon startup. Valid
values are:
• true - By default, Guidewire sets the value of printcommands to true in the base configuration.
• false - Do not set printcommands to false in a production environment.
The default is true.
versionchecksonly Boolean. Whether the ClaimCenter server runs only database version checks at startup, without
performing any actual database upgrade steps:
• true - ClaimCenter runs all version checks regardless of a failure in one of the checks.
• false - ClaimCenter stops the upgrade if it detects an error.
The default is false. Changes to this attribute take effect only during an application upgrade.
The <database> element takes the following subelements. There is, at most, a single occurrence of each of these
subelements in the <database> element.
databasestatistics Specifies parameters that control the generation of database statistics. See “The databasestatistics
Database Configuration Element” on page 224 and “Database Statistics Generation” on page 279
for more information.
dbcp-connection-pool Specifies parameters for connection pool shared using dbcp. You must include this subelement if
using a dbcp data source. See “The dbcp-connection-pool Database Configuration Element” on page
225 and the Installation Guide for more information.
jndi-connection-pool Specifies parameters for a connection pool shared using JNDI. You must include this subelement if
using a jndi data source. See “The jndi-connection-pool Database Configuration Element” on page
228 and the Installation Guide for more information.
loader Specifies configuration parameters that affect the loading of data into the ClaimCenter database
using staging tables. See “The loader Database Configuration Element” on page 231 for more
information.
oracle-settings Specifies settings for Oracle databases. See “The oracle-settings Database Configuration Element” on
page 236 and the Installation Guide for more information.
sqlserver-settings Specifies settings for SQL Server databases. See “The sqlserver-settings Database Configuration
Element” on page 237 and the Installation Guide for more information.
upgrade Specifies ClaimCenter behavior during a database upgrade. See “The upgrade Database
Configuration Element” on page 237 for more information.
See also
• Installation Guide
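For illustration only, a <database> element that combines several of these attributes might look like the
following sketch. The database name (ClaimCenterDatabase) and environment name (prod) are hypothetical
values, not defaults.
<database name="ClaimCenterDatabase" dbtype="sqlserver" autoupgrade="manual" checker="false" env="prod">
  <!-- connection pool and other subelements go here -->
</database>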
autoupgrade Description
attribute
full Whenever the application server starts, if it determines the need for a full upgrade, the presence of this
attribute set to full is sufficient permission to perform the upgrade. With this setting:
• If a rolling (configuration) upgrade is already in progress as the server starts, the server throws an
exception, to force the choice of an upgrade type.
• If a full upgrade is already in progress by other means as the server starts, there is no issue, as this
setting is consistent with a full upgrade.
manual This setting requires that you explicitly set the permission to upgrade through one of the following means:
• Setting the database upgrade type in the Server Tools Upgrade and Versions screen and initiating the
upgrade from that screen.
• Setting the following Java system property to the current date as the application server starts:
-Dgw.cc.full.upgrade.intended.date
See “Unexpected Upgrades” on page 177 for a discussion of the use of this Java system property.
You must set the value of autoupgrade to manual if performing a rolling (configuration) upgrade.
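For example, you might pass this property as a JVM option when starting the application server. The date
value shown here is only a placeholder; supply the current date in the format that your deployment expects.
-Dgw.cc.full.upgrade.intended.date=2024-01-15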
The following table describes the interactions between setting Start Full Upgrade on the Upgrade and Versions screen and
the value of the autoupgrade attribute during deployment of non-data model changes to a production mode server.
autoupgrade Result
full If you set the value of the autoupgrade attribute to full, any attempt to start a rolling upgrade fails. In addition,
you must always set the upgrade type (full) using the Server Tools Upgrade and Versions screen or by using the
system_tools -startfullupgrade command option, for example.
manual If you set the value of the autoupgrade attribute to manual, you must always set the upgrade type (full or rolling) or
the upgrade fails. You can set the upgrade type in the Server Tools Upgrade and Versions screen or by setting a
JVM parameter at server startup. For a rolling upgrade, the new configuration (database-config.xml) must set
this value to manual. This new value overrides any value set in the old configuration.
Not set If you do not set the value of the autoupgrade attribute, ClaimCenter assumes a default value of manual and
behaves accordingly.
The <databasestatistics> element has the following syntax.
<database>
<databasestatistics databasedegree="integer" incrementalupdatethresholdpercent="integer"
numappserverthreads="integer" samplingpercentage="integer" useoraclestatspreferences="true|false">
<!-- Sets database statistics options for the named table -->
<tablestatistics action="delete|keep|update|force" databasedegree="integer" name="string"
samplingpercentage="integer">
<!-- Sets database statistics options for the named column on the named table -->
<histogramstatistics name="string" numbuckets="integer" />
</tablestatistics>
</databasestatistics>
</database>
The following list describes the attributes that you can configure on the <databasestatistics> element. All of
these attributes are optional. See “The Database Statistics Element” on page 285 for more information on these
attributes.
databasedegree On Oracle, this attribute controls the degree of database parallelism that
Oracle uses in executing each individual statement. The default is 1.
ClaimCenter uses the value of this attribute for all statements.
SQL Server ignores the databasedegree attribute.
incrementalupdatethresholdpercent This attribute specifies the percentage of table data that must have
changed since the last statistics process for the incremental statistics
generation batch process to update statistics for the table.
numappserverthreads On both Oracle and SQL Server, the numappserverthreads attribute
controls the number of threads that ClaimCenter uses to update database
statistics for staging tables during import only.
samplingpercentage The behavior of this attribute depends on the database type. For Oracle,
Guidewire recommends that you always set this value to 0 to enable Oracle
auto-sampling.
useoraclestatspreferences On Oracle, this attribute sets the database statistics preferences to be able
to use the Oracle Autotask infrastructure instead of the DBStats batch
process from ClaimCenter. The default is false, which requires that you
disable the Autotask and schedule DBStats batch processing in its place.
Changes to the value of this attribute only take effect during an application
upgrade.
tablestatistics Provides overrides of database-wide statistics settings defined on the <databasestatistics> element for
a specific table. There can be multiple occurrences of the <tablestatistics> subelement on the
<databasestatistics> element.
See also
• “The Database Configuration File” on page 220
• “Understanding Database Statistics” on page 279
• “Configuring Database Statistics Generation” on page 284
• “The Database Statistics Element” on page 285
• “The Table Statistics Database Element” on page 289
• “Database Statistics” on page 373
• “Using Oracle AutoTask for Statistics Generation” on page 287
• “System Tools Options” on page 423
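The following sketch shows a hypothetical statistics configuration with a per-table override. The table name
(cc_claim) and the sampling values are illustrative only.
<database>
  <databasestatistics samplingpercentage="0" useoraclestatspreferences="false">
    <!-- Override sampling for one large table -->
    <tablestatistics name="cc_claim" samplingpercentage="10"/>
  </databasestatistics>
</database>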
The <dbcp-connection-pool> element has the following syntax. The following code sample shows required
attributes in bold font.
<database>
<!-- Sets options for the connection pool that Guidewire provides -->
<dbcp-connection-pool jdbc-url="string" max-idle="integer" max-total="integer" max-wait="integer"
min-evictable-idle-time="integer" num-tests-per-eviction-run="integer" password-file="string"
test-on-borrow="true|false" test-on-return="true|false" test-while-idle="true|false"
time-between-eviction-runs="integer" when-exhausted-action="block|fail|grow">
<reset-tools-params collation="string" oracle-tnsnames="string" system-password="string"
system-username="string"/>
</dbcp-connection-pool>
</database>
The following list describes the attributes that you can configure on the <dbcp-connection-pool> element.
Note: These attributes apply only if you use the default connection pool. If you use the server connection pool,
these settings do not apply. Configure the server connection pool instead through the administration console
provided with the server. See the Installation Guide for more information.
jdbc-url Required. Stores connection information for the database. The format of
the jdbc-url value changes depending on the database type. See the
Installation Guide for more information.
The following attributes are all optional.
max-idle Maximum number of connections that can sit idle in the pool at any time.
If negative, there is no limit to the number of connections that can be idle
at any given time. The default is -1.
max-total Maximum number of connections that the connection pool can allocate,
including those in use by a client or that are in an idle state awaiting use. A
reasonable initial value for this is about 25% of the number of users that
you expect to use ClaimCenter at the same time.
If set to a negative integer, there is no limit to the number of allowed
database connections. The default is -1.
If the number of database connections reaches the value of max-total,
ClaimCenter considers the connection pool to have no more available
connections.
max-wait Maximum amount of time, in milliseconds, that the data source waits for a
connection to become available in the pool. The default is 30000.
The value of the max-wait attribute interacts with the
when-exhausted-action attribute. See that attribute for more information.
min-evictable-idle-time Maximum time, in milliseconds, that a connection can sit idle in the pool
before it is eligible for eviction due to idle time. If a connection is idle for more
than the specified number of milliseconds, ClaimCenter evicts the connection
from the pool. The default is 300000 milliseconds.
If this value is a non-positive integer, ClaimCenter does not drop
connections from the pool due to idle time alone. This setting has no effect
unless the value of time-between-eviction-runs is greater than 0.
During an eviction run, ClaimCenter scans the connection pool and tests
the number of idle connections equal to the value of
num-tests-per-eviction-run.
num-tests-per-eviction-run Number of idle connections that ClaimCenter tests in each eviction run.
This setting has no effect unless the value of time-between-eviction-runs
is greater than 0. The default is 3.
password-file Use to hide the value of the database connection password in the
jdbc-url connection string. Instead of providing the password in the connection
string, you can place the password in an external file and reference this file
from file database-config.xml. See the Installation Guide for more
information.
test-on-borrow Boolean. Whether ClaimCenter tests a connection by running a simple
validation query as ClaimCenter first borrows the connection from the
connection pool. If set to true, the connection pool attempts to validate
each connection before ClaimCenter uses the connection from the
connection pool. If a connection fails validation, the connection pool drops
the connection and chooses a different connection to borrow. The default
is false.
ClaimCenter returns any connection used only for a query to the pool
immediately after the query completes. Thus, running a test query every
time that a connection returns to the pool can potentially affect
performance.
test-on-return Boolean. Whether ClaimCenter tests a connection by running a simple
validation query as ClaimCenter returns the connection to the connection
pool. If set to true, the connection pool attempts to validate each
connection as ClaimCenter returns it to the pool. The default is false.
ClaimCenter returns any connection used only for a query to the pool
immediately after the query completes. Thus, running a test query every
time that a connection returns to the pool can potentially affect
performance.
test-while-idle Boolean. Whether ClaimCenter performs validation on idle connections in
the connection pool. If set to true, the connection pool performs
validation on idle connections. It drops connections that fail the validation
test. The default is true.
This attribute value has no effect unless the value of
time-between-eviction-runs is greater than zero.
time-between-eviction-runs Time, in milliseconds, that ClaimCenter waits between eviction runs of idle
connections in the connection pool. The default is 60000.
If set to a non-positive integer, ClaimCenter does not launch any eviction
threads.
when-exhausted-action Specifies the behavior of the connection pool if the pool has no more
connections. Set this attribute to one of the following values:
• fail – If there are no more connections available, ClaimCenter
throws a NoSuchElementException exception.
• grow – If there are no more connections available, ClaimCenter creates
a new connection and returns it, essentially making max-total
meaningless.
• block – If there are no more connections available, ClaimCenter blocks
until a new or idle connection becomes available. If the
value of max-wait is positive, then ClaimCenter blocks, at most, for that
number of milliseconds, after which ClaimCenter throws a
NoSuchElementException exception. If the value of max-wait is non-positive,
ClaimCenter blocks indefinitely.
The default is block.
reset-tools-params See “The reset-tools-params Database Configuration Element” on page 228 for more information.
See also
• “The Database Configuration File” on page 220
• “Database Parameters” on page 368
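As a hedged sketch, a dbcp connection pool configuration might resemble the following. The jdbc-url shown is a
made-up SQL Server example, and the tuning values are illustrative rather than recommendations.
<database>
  <dbcp-connection-pool
      jdbc-url="jdbc:sqlserver://dbhost:1433;databaseName=ccdb;user=ccuser;password=secret"
      max-total="50" max-wait="30000" test-while-idle="true"
      time-between-eviction-runs="60000" when-exhausted-action="block"/>
</database>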
<database>
<dbcp-connection-pool>
<reset-tools-params collation="string" oracle-tnsnames="string" system-password="string"
system-username="string"/>
</dbcp-connection-pool>
</database>
The following list describes the attributes that you can configure on the <reset-tools-params> element. All of
these attributes are optional.
collation Collation value to use if creating a new H2 (QuickStart) or SQL Server database:
• H2 – Sets database collation using the Java Collation class.
• SQL Server – Sets database collation to a Microsoft Windows collation name or a SQL collation name.
DBResetTool (dropdb) uses the value of this attribute if creating a new H2 or SQL Server database.
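For example (illustrative only; the collation shown is just one common SQL Server collation name):
<reset-tools-params collation="SQL_Latin1_General_CP1_CI_AS"/>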
<database>
<!-- Sets the data source for a JBoss, Tomcat, WebLogic, or WebSphere application server-->
<jndi-connection-pool datasource-name="string"
connections-initialized-for-application="true|false"/>
</database>
IMPORTANT If you modify the <jndi-connection-pool> element in any way, you must restart the application
server.
The following list describes the attributes that you can configure on the <jndi-connection-pool> element.
datasource-name Required. Specifies the JNDI name to assign to the data source. See the
Installation Guide for more information.
The following attribute is optional.
connections-initialized-for-application (Oracle) Boolean. Controls the number of SQL statements that ClaimCenter
executes on every connection that it borrows from an external data source.
This setting applies to the named JNDI database connection set up in this
<jndi-connection-pool> element only.
Valid values are:
• true – If configured appropriately, the data source provides
connections with certain Oracle database parameters set to their
desired values.
• false – ClaimCenter runs a set of SQL statements on each and every
database connection that it borrows from the data source.
The default is false.
To take advantage of this feature:
• Your ClaimCenter installation must use an Oracle database.
• You must configure the data source appropriately.
See “Configuring JNDI Connection Initialization for Oracle” on page 229 for
more information.
IMPORTANT If you set this attribute to true and do not initialize the
connections properly, the application server refuses to start and logs an
error message for each incorrect setting.
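As a sketch, assuming a hypothetical data source named jdbc/ClaimCenterDataSource, a JNDI pool definition
might look like this:
<database>
  <jndi-connection-pool datasource-name="jdbc/ClaimCenterDataSource"
      connections-initialized-for-application="true"/>
</database>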
To use this feature, you must configure the application server to manage the connection initialization. See “Set
Oracle Database Parameters for Connection Initialization” on page 230 for more information.
Procedure
1. Set attribute connections-initialized-for-application on the <jndi-connection-pool> element to
true.
2. Start the application server.
3. From the server log, determine the exact values to set for connection initialization.
4. Depending on your application server, set up the connection initialization as appropriate.
See “Connection Initialization for Oracle Databases” on page 230 for more information.
5. After completing your initialization configuration, restart the application server and Guidewire ClaimCenter.
BEGIN
DBMS_APPLICATION_INFO.SET_MODULE('ClaimCenter_CCPROD1', NULL);
EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_SORT = BINARY_CI';
END;
Note: In the sample code, replace ClaimCenter_CCPROD1 with the actual name of the Oracle database followed by
the logon user name.
See also
• “Configuring JNDI Connection Initialization for Oracle” on page 229
END;
/
Note: In the sample code, replace Guidewire schema owner with the Oracle database system username. Replace
ClaimCenter_CCPROD1 with the actual name of the Oracle database followed by the logon user name.
See also
• “Configuring JNDI Connection Initialization for Oracle” on page 229
The <loader> element has the following syntax.
<database>
<loader>
<!-- Sets performance options for the named table, overrides values set at the database level -->
<loader-table drop-deferrable-indexes="disable|enable|enable_all"
fk-enable-degree-of-parallelism="integer" name="string"
row-counts-degree-of-parallelism="integer">
<loader-index key-columns="string"/>
</loader-table>
</loader>
</database>
The <loader> element takes the following subelements. Both of these subelements are
optional.
callback Specifies parameters related to database parallelism and overrides for individual callbacks. See “The callback
Database Configuration Element” on page 233 for details.
loader-table Specifies parameters related to overrides for individual tables. See “The loader-table Database Configuration
Element” on page 235 for details.
See also
• “The Database Configuration File” on page 220
• “Table Import Options” on page 430
• Installation Guide
<database>
<loader>
<callback after-insert-select-callback-degree-of-parallelism="integer"
before-gen-ids-callback-degree-of-parallelism="integer"
before-insert-select-callback-degree-of-parallelism="integer"
insert-select-degree-of-parallelism="integer" name="string"/>
</loader>
</database>
The following list describes the attributes that you can configure on the <callback> element.
See also
• “The Database Configuration File” on page 220
• “The loader Database Configuration Element” on page 231
<database>
<loader>
<loader-table drop-deferrable-indexes="disable|enable|enable_all"
fk-enable-degree-of-parallelism="integer" name="string"
row-counts-degree-of-parallelism="integer">
<loader-index key-columns="string"/>
</loader-table>
</loader>
</database>
The following list describes the attributes that you can configure on the <loader-table> element.
The <loader-table> element has the following subelement. The <loader-table> element can contain any number
of occurrences of subelement <loader-index>.
loader-index If present, the <loader-index> element specifies the columns on the index to which the <loader-table>
overrides apply. This element has one required attribute, key-columns. This attribute provides the index’s key
columns in an ordered, comma-separated list.
See also
• “The Database Configuration File” on page 220
• “The loader Database Configuration Element” on page 231
• Installation Guide
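The following sketch shows a hypothetical per-table loader override; the table name and key column are
illustrative only.
<database>
  <loader>
    <loader-table name="cc_claim" drop-deferrable-indexes="enable"
        fk-enable-degree-of-parallelism="4" row-counts-degree-of-parallelism="2">
      <loader-index key-columns="ID"/>
    </loader-table>
  </loader>
</database>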
The <oracle-settings> element has the following syntax.
<database>
...
<oracle-settings adaptive-optimization="OFF|REPORTING_ONLY" db-resource-mgr-cancel-sql="string"
query-rewrite="true|false" statistics-level-all="true|false" stored-outline-category="string"/>
...
</database>
The following list describes the attributes that you can configure on the <oracle-settings> element. All of these
attributes are optional.
adaptive-optimization Specifies the behavior of the Oracle Adaptive Optimization feature. Valid values are:
• OFF
• REPORTING_ONLY
db-resource-mgr-cancel-sql Name of an Oracle Resource Consumer Group, if any. Guidewire currently uses this attribute
with Claim Search only. The main purpose of this attribute is to cancel long-running queries
during the search.
query-rewrite Boolean. Whether to enable query rewrite.
Valid values are:
• true – Enable query rewrite and use a matching materialized view.
• false – Set the Oracle query-rewrite parameter to false to disable use of Oracle
materialized views.
If not present, Guidewire does not set this value at the session level.
statistics-level-all Boolean. Whether to set the Oracle statistics_level parameter to ALL and enable
collection of detailed execution plan statistics.
The default is false.
stored-outline-category Name of the stored outline category to use, if any.
See also
• “The Database Configuration File” on page 220
<database>
...
<sqlserver-settings jdbc-trace-file="string" jdbc-trace-level="string"
unicodecolumns="true|false"/>
...
</database>
The following list describes the attributes that you can configure on the <sqlserver-settings> element. All of
these attributes are optional.
jdbc-trace-file Specifies the name of the trace file. If you do not provide a file name, this value defaults to the
following:
C:\temp\msjdbctrace%u.log
ClaimCenter replaces the symbols in the file name at runtime with their meaning as listed at the
following web site.
http://java.sun.com/j2se/1.5.0/docs/api/java/util/logging/FileHandler.html
Use the listed symbols to uniquely name the trace file.
jdbc-trace-level Valid trace level as listed at the following web site:
http://msdn.microsoft.com/en-us/library/ms378517(SQL.90).aspx?ppud=4
unicodecolumns Required if starting a new database that exclusively uses Unicode-capable column character data
types (nvarchar, …). ClaimCenter ignores this attribute if the database does not support Unicode or
if the attribute is not relevant to the new database. The default is false.
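An illustrative <sqlserver-settings> entry follows; the trace file path and trace level are examples only.
<database>
  <sqlserver-settings jdbc-trace-file="C:\temp\msjdbctrace%u.log" jdbc-trace-level="FINE"
      unicodecolumns="true"/>
</database>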
<database>
<upgrade>
<mssql-db-ddl>
<!-- Sets SQL Server database options at the global, database level -->
<mssql-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-filegroups admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
<!-- Set SQL Server options for the named table, overrides values set at database level -->
<mssql-table-ddl table-name="string">
<mssql-index-ddl filter-where="string" index-compression="NONE|PAGE|ROW"
index-filegroup="string" key-columns="string" partition-scheme="string"/>
<mssql-table-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-table-filegroups table-filegroup="string" index-filegroup="string" lob-filegroup="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
<ora-db-ddl>
<!-- Sets Oracle database options at the global, database level -->
<ora-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<tablespaces admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
<!-- Sets Oracle options for the named table, overrides values set at the database level -->
<ora-table-ddl table-name="string">
<ora-index-ddl index-compression="true|false" index-tablespace="string" key-columns="string"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<ora-table-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-table-date-interval-partitioning datecolumn="string"
interval="DAILY|MONTHLY|QUARTERLY|WEEKLY|YEARLY"/>
<ora-table-hash-partitioning hash-columns="string" num-partitions="integer"/>
<ora-table-tablespaces index-tablespace="string" lob-tablespace="string"
table-tablespace="string"/>
</ora-table-ddl>
</ora-db-ddl>
<versiontriggers dbmsperfinfothreshold="integer">
<!-- Sets override options for the named database version trigger -->
<versiontrigger extendedquerytracingenabled="true|false" name="string"
parallel-dml="true|false" parallel-query="true|false"
queryoptimizertracingenabled="true|false" recordcounters="true|false"
updatejoinorderedhint="true|false" updatejoinusemergehint="true|false"
updatejoinusenlhint="true|false"/>
</versiontriggers>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <upgrade> element. All of these attributes
are optional.
allowUnloggedOperations Boolean. Whether to disable logging of certain SQL operations during the database
upgrade.
Valid values are:
• true – Run the upgrade with minimal database redo logging and enable direct-
path INSERT operations.
• false – Run the upgrade with standard database redo logging.
The default is false.
Note: If you run the upgrade with attribute allowUnloggedOperations set to true,
then you need to take a full database backup after the upgrade.
collectstorageinstrumentation (Oracle) Boolean. Whether ClaimCenter collects tablespace usage and object size
data before and after the upgrade.
Valid values are:
• true – ClaimCenter collects tablespace usage and size of segments such as
tables, indexes and LOBs (large object binaries) before and after the upgrade.
You can then compare the before and after values to find the utilization change
caused by the upgrade.
• false – ClaimCenter does not collect this data.
The default is false.
defer-create-nonessential-indexes Boolean. Whether to defer creation of non-essential indexes during the upgrade
process until the upgrade completes and the application server is back up. Creation
of non-essential indexes can add significant time to the upgrade duration.
Valid values are:
• true – Defer creation of non-essential indexes during upgrade.
• false – Do not defer creation of non-essential indexes during upgrade.
The default is false.
Non-essential indexes are:
• Performance-related indexes that do not enforce constraints.
• Indexes on the ArchivePartition column on all entities that ClaimCenter can
archive.
If you choose to defer creation of non-essential indexes, ClaimCenter runs the
Deferred Upgrade Tasks batch process (DeferredUpgradeTasks) as soon as the
upgrade completes and the server starts up. See “Deferred Upgrade Tasks Batch
Processing” on page 115 for more information.
deferDropColumns (Oracle) Boolean. Whether to drop table columns removed during upgrade
immediately or leave their removal to a later time. The database upgrade removes
some columns. For Oracle, you can configure whether the removed columns are
dropped immediately or are marked as unused. Marking a column as unused is a
faster operation than dropping the column immediately.
However, as ClaimCenter does not physically drop the removed columns from the
database, the space used by these columns is not released immediately to the table
and index segments.
Valid values are:
• true – Defer dropping removed columns until after the upgrade, possibly during
off-peak hours of operation. The ClaimCenter database upgrade marks the
removed columns as unused instead.
• false – Drop the removed columns immediately, during the upgrade process.
The default is true.
degree-of-parallelism (Oracle) Controls the degree of database parallelism that Oracle uses for INSERT,
UPDATE, and DELETE database operations.
Valid values are:
• 0 – Defers to Oracle to determine the degree of database parallelism for the
operations that the attribute configures. The Oracle automatic parallel tuning
feature determines the degree based on the number of CPU processors involved
and the value set for the Oracle parameter PARALLEL_THREADS_PER_CPU.
• 1 – Disables parallel execution for these operations.
• Positive integer less than 1000 – Database parallelism, with the specified value
as the degree of parallelism.
The default is 4.
degree-parallel-ddl (Oracle) Controls the degree of database parallelism that Oracle uses to execute
DDL (Data Definition Language) statements during the database upgrade. Use to
configure the degree of database parallelism for commands such as CREATE INDEX
and the ALTER TABLE commands.
Valid values are:
• 0 – Defers to Oracle to determine the degree of database
parallelism for the operations that the attribute configures. The Oracle
automatic parallel tuning feature determines the degree based on the number
of CPUs involved and the value set for the Oracle parameter
PARALLEL_THREADS_PER_CPU.
• 1 – Disables the parallel execution of DDL statements.
• Positive integer less than 1000 – Database parallelism, with the specified value
as the degree of parallelism.
The default is 4.
If you set the value of ora-parallel-dml to enable or enable_all (default), then
you need to provide a value for attribute degree-of-parallelism as well.
encryptioncommitsize Sets the commit size for rows requiring encryption. If one or more attributes use
ClaimCenter encryption, the ClaimCenter database upgrade commits batches of
encrypted values. The upgrade commits encryptioncommitsize rows at a time in
each batch.
The default value of encryptioncommitsize varies based on the database type:
• Oracle – 10000
• SQL Server – 100
Test the upgrade on a copy of your production database before attempting to
upgrade the actual production database. If the encryption process is slow, and you
cannot attribute the slowness to SQL statements in the database, try adjusting the
encryptioncommitsize attribute. After you have optimized the performance of the
encryption process, use that value of encryptioncommitsize as you upgrade your
production database.
ora-parallel-dml (Oracle) Controls database parallelism usage by Oracle in the execution of DML
(Data Manipulation Language) operations.
Valid values are:
• disable – Oracle does not execute DML statements in parallel during upgrade.
• enable – Oracle executes DML statements in parallel during upgrade, if
configured to do so.
• enable_all – Oracle executes DML statements in parallel during upgrade in all
cases, unless turned off in the code or through configuration.
The default is enable_all.
If you set the value of ora-parallel-dml to enable or enable_all, then you need
to provide a value for attribute degree-of-parallelism as well.
Note: The value of this attribute interacts with the parallel-dml attribute on the
<versiontrigger> element. See “The versiontrigger Database Configuration
Element” on page 255 for more information.
ora-parallel-query (Oracle) Controls parallel query usage by Oracle during a database upgrade.
Valid values are:
• disable – Oracle does not use parallel queries during upgrade.
• enable – Oracle uses parallel queries during upgrade, if configured to do so.
The default is enable.
The value of this attribute interacts with the parallel-query attribute on the
<versiontrigger> element. See “The versiontrigger Database Configuration Element”
on page 255 for more information.
sqlserverCreateIndexSortInTempDB (SQL Server) Boolean. Whether SQL Server stores temporary sort results in tempdb.
By using tempdb for sort runs, disk input and output is typically faster, and the
created indexes tend to be more contiguous. Valid values are:
• true – SQL Server stores sort results in tempdb.
• false – SQL Server stores sort results in the destination filegroup.
The default is false.
If you set sqlserverCreateIndexSortInTempDB to true, you must have enough
disk space available to tempdb for the sort runs, which, for the clustered index,
includes the data pages. You must also have sufficient free space in the destination
filegroup to store the final index structure, because SQL Server creates the new
index before deleting the old index.
Refer to the following web site for details on the requirements to use tempdb for
sort results.
http://msdn.microsoft.com/en-us/library/ms188281.aspx
updatestatistics (Oracle) Boolean. Whether to update table statistics during upgrade. The overall
time that it takes to upgrade the database is shorter if the database upgrade does
not update statistics.
Valid values are:
• true – Enables the upgrader to update statistics on changed objects. It also
allows the upgrader to maintain column level statistics consistent with what is
allowed in the code, data model, and configuration.
• false – Disable statistics generation during the upgrade.
If ClaimCenter does not update statistics during the upgrade:
• It reports a warning that recommends that you run the database statistics batch
process (DBStats) in incremental mode during the next maintenance window.
• It updates the Server Tools Upgrade and Versions screen to show that the upgrade
did not update statistics.
If ClaimCenter does generate statistics during the upgrade, it updates the Upgrade
and Versions screen to report the runs of the statistics batch process, including
incremental runs.
Note: Guidewire recommends that you run statistics in full mode after an upgrade
to a major ClaimCenter version.
See the following for more information:
• “Database Statistics Batch Processing” on page 114
• “Configuring Database Statistics Generation” on page 284
• “Upgrade and Versions” on page 393
verifyschema Boolean. Whether ClaimCenter performs a verification of the database schema
before starting the database upgrade. This process verifies that the ClaimCenter
data model matches the physical database. The default is true.
The verification process can take some time, so Guidewire recommends
that you perform this verification before starting the upgrade by using
the system_tools -verifydbschema command option. To use the command,
enter the following at a command prompt:
system_tools -password password -verifydbschema
See “System Tools Command” on page 422 for more information.
The <upgrade> element has the following subelements. Each of these elements is optional. There is, at most, a
single occurrence of each of these subelements on the <upgrade> element.
mssql-db-ddl Specifies options for SQL Server database DDL (Data Definition Language) statements. See “The
mssql-db-ddl Database Configuration Element” on page 242 for details.
ora-db-ddl Specifies options for Oracle database DDL (Data Definition Language) statements. See “The ora-db-ddl
Database Configuration Element” on page 247 for details.
versiontriggers Specifies options for named version triggers. See “The versiontriggers Database Configuration Element” on
page 254 for details.
See also
• “The Database Configuration File” on page 220
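A hedged sketch of an <upgrade> element that combines a few of these attributes; the values shown are
illustrative choices, not recommendations.
<database>
  <upgrade allowUnloggedOperations="true" defer-create-nonessential-indexes="true"
      updatestatistics="false" verifyschema="true"/>
</database>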
<database>
<upgrade>
<mssql-db-ddl>
<mssql-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-filegroups op="string" admin="string" typelist="string" staging="string"
index="string" lob="string"/>
<mssql-table-ddl table-name="string">
<mssql-index-ddl filter-where="string" index-compression="NONE|PAGE|ROW"
index-filegroup="string" key-columns="string" partition-scheme="string"/>
<mssql-table-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-table-filegroups table-filegroup="string" index-filegroup="string" lob-filegroup="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
</upgrade>
</database>
mssql-compression Specifies compression settings for SQL Server database tables and indexes at the global, database level.
See “The mssql-compression Database Configuration Element” on page 242 for more information.
mssql-filegroups Specifies the mapping between SQL Server database filegroups and ClaimCenter logical tablespaces at
the global, database level. See “The mssql-filegroups Database Configuration Element” on page 243 for
more information.
mssql-table-ddl Specifies SQL Server database DDL options for a named table. These settings override values set at the
global, database level. See “The mssql-table-ddl Database Configuration Element” on page 244 for
more information.
See also
• “The Database Configuration File” on page 220
• “The upgrade Database Configuration Element” on page 237
• Installation Guide
<database>
<upgrade>
<mssql-db-ddl>
<mssql-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
</mssql-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <mssql-compression> element.
index-compression If present, specifies the index compression setting for all indexes. Valid values are:
• NONE
• PAGE
• ROW
The default is NONE.
table-compression If present, specifies the table compression setting for all tables. Valid values are:
• NONE
• PAGE
• ROW
The default is NONE.
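For example (illustrative only), the following enables page compression for all tables and indexes at the
database level:
<mssql-compression index-compression="PAGE" table-compression="PAGE"/>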
<database>
<upgrade>
<mssql-db-ddl>
<mssql-filegroups op="string" admin="string" typelist="string" staging="string"
index="string" lob="string"/>
</mssql-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <mssql-filegroups> element.
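As a sketch, a filegroup mapping might look like the following; the filegroup names (CC_OP, CC_INDEX, and so
on) are hypothetical.
<mssql-filegroups op="CC_OP" admin="CC_ADMIN" typelist="CC_TYPELIST" staging="CC_STAGING"
    index="CC_INDEX" lob="CC_LOB"/>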
<database>
<upgrade>
<mssql-db-ddl>
<mssql-table-ddl table-name="string">
<mssql-index-ddl filter-where="string" index-compression="NONE|PAGE|ROW"
index-filegroup="string" key-columns="string" partition-scheme="string"/>
<mssql-table-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
<mssql-table-filegroups lob-filegroup="string" index-filegroup="string" table-filegroup="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
</upgrade>
</database>
The <mssql-table-ddl> element has the following subelements. Each of these elements is optional. There is, at
most, a single occurrence of the <mssql-table-compression> and <mssql-table-filegroups> elements on the
<mssql-table-ddl> element. There can be, however, multiple occurrences of the <mssql-index-ddl> element.
mssql-index-ddl Specifies DDL options for a specific index. See “The mssql-index-ddl Database Configuration
Element” on page 245 for more information.
mssql-table-compression Specifies compression for the named table. See “The mssql-table-compression Database
Configuration Element” on page 245 for more information.
mssql-table-filegroups Specifies a filegroup to associate with a table, index, or LOB. See “The mssql-table-filegroups
Database Configuration Element” on page 246 for more information.
See also
• “The Database Configuration File” on page 220
• “The upgrade Database Configuration Element” on page 237
<database>
<upgrade>
<mssql-db-ddl>
<mssql-table-ddl table-name="string">
<mssql-index-ddl filter-where="string" index-compression="NONE|PAGE|ROW"
index-filegroup="string"
key-columns="string" partition-scheme="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <mssql-index-ddl> element.
key-columns Required. Comma-delimited list of key columns, in order. Specify DESC after the column name for a
descending sort order on the column.
The following attributes are all optional.
filter-where Specifies an index filter to add after the WHERE keyword in the SQL Server CREATE INDEX ... WHERE
statement. The filter that you create must conform to standard SQL Server rules.
index-compression Specifies the compression setting for the specified index. Valid values are:
• NONE
• PAGE
• ROW
If not specified, ClaimCenter uses the SQL Server database default.
index-filegroup Name of the filegroup associated with this index. Do not use this attribute if you supply a value for
the partition-scheme attribute, as the two attributes are mutually exclusive.
partition-scheme Name of a partition scheme for this index. Use of this attribute implies the use of ClaimCenter
clustering. Do not use this attribute if you supply a value for the index-filegroup attribute, as the two
attributes are mutually exclusive.
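An illustrative <mssql-index-ddl> override follows; the table name, key columns, and filter are hypothetical.
<mssql-table-ddl table-name="cc_claim">
  <mssql-index-ddl key-columns="LossDate DESC, ID" index-compression="PAGE" filter-where="Retired = 0"/>
</mssql-table-ddl>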
The <mssql-table-compression> element has the following syntax. The following code sample shows required
attributes in bold font.
<database>
<upgrade>
<mssql-db-ddl>
<mssql-table-ddl table-name="string">
<mssql-table-compression index-compression="NONE|PAGE|ROW" table-compression="NONE|PAGE|ROW"/>
</mssql-table-ddl>
</mssql-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <mssql-table-compression> element. All
of these attributes are optional.
index-compression Specifies the index compression setting for the specified index. Valid values are:
• NONE
• PAGE
• ROW
If not specified, ClaimCenter uses the database default.
table-compression Specifies the table compression setting for the specified table. Valid values are:
• NONE
• PAGE
• ROW
If not specified, ClaimCenter uses the database default.
<database>
<upgrade>
<mssql-db-ddl>
<mssql-table-ddl table-name="string">
<mssql-table-filegroups table-filegroup="string" index-filegroup="string" lob-filegroup="string"/>
</mssql-table-ddl>
</mssql-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <mssql-table-filegroups> element. All
of these attributes are optional. However, if you do not specify at least one of these attributes, there is no need for
this element to be present in database-config.xml.
index-filegroup Name of the filegroup to associate with any indexes on this table.
lob-filegroup Name of the filegroup to associate with any large object (LOB, CLOB, or spatial column).
table-filegroup Name of the filegroup to associate with the table itself.
<database>
<upgrade>
<ora-db-ddl>
<!-- Sets Oracle database options at the global, database level -->
<ora-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<tablespaces admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
<!-- Sets Oracle options for the named table, overrides values set at the database level -->
<ora-table-ddl table-name="string">
<ora-index-ddl index-compression="true|false" index-tablespace="string" key-columns="string"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<ora-table-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-table-date-interval-partitioning datecolumn="string"
interval="DAILY|MONTHLY|QUARTERLY|WEEKLY|YEARLY"/>
<ora-table-hash-partitioning hash-columns="string" num-partitions="integer"/>
<ora-table-tablespaces index-tablespace="string" lob-tablespace="string"
table-tablespace="string"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
ora-compression Specifies Oracle compression settings for all tables and indexes at the global, database level. See “The
ora-compression Database Configuration Element” on page 248 for more information.
ora-lobs Specifies attributes for LOB columns on all tables at the global, database level. See “The ora-lobs Database
Configuration Element” on page 248 for more information.
ora-table-ddl Specifies DDL parameters and overrides for a specific, named Oracle database table. See “The
ora-table-ddl Database Configuration Element” on page 250 for more information.
tablespaces Specifies default mappings for Oracle tablespaces at a global, database level. See “The tablespaces
Database Configuration Element” on page 249 for more information.
See also
• “The Database Configuration File” on page 220
• “The upgrade Database Configuration Element” on page 237
• Installation Guide
<database>
<upgrade>
<ora-db-ddl>
<ora-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-compression> element. All of these
attributes are optional.
index-compression Boolean. Whether to use index compression for all indexes in an Oracle database. The default is false.
table-compression Specifies table compression type for all tables in an Oracle database.
Valid values are:
• ADVANCED
• BASIC
• NONE
The default is NONE.
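For example (illustrative only), the following enables index compression and advanced table compression for
all tables and indexes:
<ora-compression index-compression="true" table-compression="ADVANCED"/>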
<database>
<upgrade>
<ora-db-ddl>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-lobs> element. All of these attributes
are optional.
caching Boolean. Whether to use caching for all LOB columns on a table or for the Oracle database globally. The default is fa
lse.
<database>
<upgrade>
<ora-db-ddl>
<tablespaces admin="string" index="string" lob="string" op="string" staging="string"
typelist="string"/>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <tablespaces> element.
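As a sketch, an Oracle tablespace mapping might look like the following; the tablespace names are hypothetical.
<tablespaces op="CC_OP_DATA" admin="CC_ADMIN_DATA" typelist="CC_TYPELIST_DATA" staging="CC_STAGING_DATA"
    index="CC_INDEX_DATA" lob="CC_LOB_DATA"/>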
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-index-ddl index-compression="true|false" index-tablespace="string" key-columns="string"/>
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
<ora-table-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
<ora-table-date-interval-partitioning datecolumn="string"
interval="DAILY|MONTHLY|QUARTERLY|WEEKLY|YEARLY"/>
<ora-table-hash-partitioning hash-columns="string" num-partitions="integer"/>
<ora-table-tablespaces index-tablespace="string" lob-tablespace="string"
table-tablespace="string"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
ora-index-ddl Specifies options for a specific Oracle index, based on key columns. See “The ora-
index-ddl Database Configuration Element” on page 251 for more information.
ora-lobs Specifies options for LOB columns on a specific, named table in an Oracle
database. See “The ora-lobs Database Configuration Element” on page 252 for
more information.
ora-table-compression Specifies compression options on a specific, named index or table in an Oracle
database. See “The ora-table-compression Database Configuration Element” on
page 252 for more information.
ora-table-date-interval-partitioning Specifies options for date range partitioning on a specific, named table in an
Oracle database. See “The ora-table-date-interval-partitioning Database
Configuration Element” on page 253 for more information.
ora-table-hash-partitioning Specifies options for hash partitioning of a specific, named table in an Oracle
database. See “The ora-table-hash-partitioning Database Configuration Element”
on page 253 for more information.
ora-table-tablespaces Specifies tablespace options for a specific, named table in an Oracle database.
See “The ora-table-tablespaces Database Configuration Element” on page 254
for more information.
See also
• “The Database Configuration File” on page 220
• “The upgrade Database Configuration Element” on page 237
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-index-ddl index-compression="true|false" index-tablespace="string" key-columns="string"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-index-ddl> element.
key-columns Required. Ordered, comma-delimited list of key columns. Specify DESC after the column name for a
descending sort order on that column.
The following attributes are optional.
index-compression Specifies the index compression for this index. If you do not specify this attribute, ClaimCenter uses the
table or database default.
index-tablespace Name of the tablespace override for the index.
ora-index-partitioning Defines partitioning for the specified Oracle index. The <ora-index-partitioning>
element has the following attributes:
• num-hash-partitions – The number of hash partitions to define. The default is 128.
• partitioning-type – Required if using this element. Sets the partitioning type to one
of the following:
◦ LOCAL – Inherit the partitioning type from the table
◦ HASH – Use hash partitions. If you set this attribute to HASH, then you need to specify
the number of partitions with the num-hash-partitions attribute. Do not set
partitioning-type to HASH if you specify an <ora-index-range-partition> subelement.
◦ RANGE – Specify the range partitioning column list and the partition upper limits with
one or more ora-index-range-partition elements.
• range-partitioning-column-list – Optional. Use to specify the global range
partitioning column list. This attribute requires the definition of one or more
ora-index-range-partition elements. Do not specify the last range, which is always VALUES
LESS THAN (MAXVALUE). Do not use if you set attribute partitioning-type to HASH.
The <ora-index-partitioning> element contains a single subelement:
• ora-index-range-partition – Optional. A comma-delimited, ordered list of literal
values corresponding to the column list in the range-partitioning-column-list
attribute. Use single quotes with string values. ClaimCenter uses this value in the clause
VALUES LESS THAN(value_list). Do not use if you set attribute partitioning-type
to HASH.
See also
• Installation Guide
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-lobs caching="true|false" type="BASIC|SECURE|SECURE_COMPRESSED"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-lobs> element. All of these attributes
are optional.
caching Sets the LOB cache attribute for the named Oracle table. The default is false.
type Sets the LOB type for the named Oracle table. Valid values are:
• BASIC
• SECURE
• SECURE_COMPRESSED
The default is SECURE.
Note: SECURE and SECURE_COMPRESSED refer to the use of Oracle SecureFiles LOBs.
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-table-compression index-compression="true|false" table-compression="ADVANCED|BASIC|NONE"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-table-compression> element. All of
these attributes are optional.
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-table-date-interval-partitioning datecolumn="string"
interval="DAILY|MONTHLY|QUARTERLY|WEEKLY|YEARLY"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the
<ora-table-date-interval-partitioning> element.
datecolumn Required. Name of the column to use for the date range. The column must be non-nullable and one of the
following types:
• datetime
• dateonly
interval Required. The interval for each partition. Valid values are:
• DAILY
• MONTHLY
• QUARTERLY
• WEEKLY
• YEARLY
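An illustrative override follows; the table and column names are hypothetical.
<ora-table-ddl table-name="cc_claim">
  <ora-table-date-interval-partitioning datecolumn="LossDate" interval="MONTHLY"/>
</ora-table-ddl>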
The <ora-table-hash-partitioning> element has the following syntax. The following code sample shows
required attributes in bold font.
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-table-hash-partitioning hash-columns="string" num-partitions="integer"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-table-hash-partitioning> element.
All of these attributes are optional.
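For example (illustrative only; the column name is hypothetical):
<ora-table-hash-partitioning hash-columns="ClaimID" num-partitions="16"/>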
<database>
<upgrade>
<ora-db-ddl>
<ora-table-ddl table-name="string">
<ora-table-tablespaces index-tablespace="string" lob-tablespace="string"
table-tablespace="string"/>
</ora-table-ddl>
</ora-db-ddl>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <ora-table-tablespaces> element. All of
these attributes are optional.
lob-tablespace Name of the tablespace override for the specified LOB column.
table-tablespace Name of the tablespace override for the specified table.
The database upgrade executes a series of version triggers that make changes to the database to upgrade between
ClaimCenter versions. Usually, the default settings are sufficient. Change these settings only while investigating a
slow database upgrade.
The <versiontriggers> element has the following syntax. The following code sample shows required attributes in
bold font.
<database>
<upgrade>
<versiontriggers dbmsperfinfothreshold="integer">
<versiontrigger extendedquerytracingenabled="true|false" name="string"
parallel-dml="true|false" parallel-query="true|false"
queryoptimizertracingenabled="true|false" recordcounters="true|false"
updatejoinorderedhint="true|false" updatejoinusemergehint="true|false"
updatejoinusenlhint="true|false"/>
</versiontriggers>
</upgrade>
</database>
dbmsperfinfothreshold Specifies–for all version triggers–the threshold after which the database upgrader gathers
performance information from the database. The default is 600 (seconds).
If a version trigger takes longer than dbmsperfinfothreshold number of seconds to
execute, ClaimCenter:
• Queries the underlying database management system (DBMS).
• Builds a set of HTML pages with performance information for the interval in which the
version trigger was executing.
• Includes these HTML pages in the upgrader instrumentation for the version trigger.
You can completely turn off the collection of database snapshot instrumentation for
version triggers by setting the value of the dbmsperfinfothreshold attribute to 0. If you
do not have the license for the Oracle Diagnostics Pack, you must set
dbmsperfinfothreshold to 0 before running the upgrade.
The <versiontriggers> element has the following subelement, of which there can be multiple occurrences.
versiontrigger Provides override instructions for a specific, named version trigger. See “The versiontrigger
Database Configuration Element” on page 255 for more information.
See also
• “The Database Configuration File” on page 220
• “The upgrade Database Configuration Element” on page 237
<database>
<upgrade>
<versiontriggers>
<versiontrigger extendedquerytracingenabled="true|false" name="string"
parallel-dml="true|false" parallel-query="true|false"
queryoptimizertracingenabled="true|false" recordcounters="true|false"
updatejoinorderedhint="true|false" updatejoinusemergehint="true|false"
updatejoinusenlhint="true|false"/>
</versiontriggers>
</upgrade>
</database>
The following list describes the attributes that you can configure on the <versiontrigger> element.
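As a hedged sketch, a per-trigger override might look like the following; the trigger name shown is a
placeholder, not an actual trigger name.
<versiontriggers dbmsperfinfothreshold="600">
  <versiontrigger name="SampleVersionTrigger" parallel-dml="false" recordcounters="true"/>
</versiontriggers>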
Database Maintenance
This topic discusses key issues for configuring and maintaining the ClaimCenter database. While ClaimCenter
automatically handles most changes to its schema, involve a database administrator in tuning and managing the
database server.
IMPORTANT The versions of third-party products that Guidewire supports for this release are subject to change
without notice. For current system and patch level requirements, visit the Guidewire Community and search for
knowledge article 1005, Supported Software Components.
See also
• Installation Guide
Database Maintenance
If you need to perform any database maintenance tasks, such as applying a patch, shut down all ClaimCenter servers
that connect to the database. Restart the servers after the database maintenance is complete.
Run consistency checks on the database, especially after importing data.
Back up the database periodically to support disaster recovery options.
Monitor storage performance. If I/O (input/output) times are slower than 10 ms, there is most likely
an issue.
Monitor tablespace size allocations and disk space to ensure that ClaimCenter does not run out of space.
Update database statistics periodically so that the query optimizer selects an efficient plan for executing application
queries.
Database tables
For Oracle databases, keep the Oracle default settings as much as possible. For example, do not set a 4K block size.
Consult with Guidewire if you want to change the default Oracle settings.
Do not insert data directly into tables managed by ClaimCenter. This can cause the data distribution tool to fail and
cause other problems.
Do not add large numbers of mediumtext and CLOB columns to a table.
Do not add an index outside of ClaimCenter without declaring it in an extension file.
See also
• “Guidewire Database Direct Update Policy” on page 261
• “ClaimCenter Database Back Up” on page 262
• “Database Consistency Checks” on page 262
• “Understanding Database Statistics” on page 279
Before completing the startup process, ClaimCenter again verifies the data model against the physical database. If,
for some reason, the model and database disagree, ClaimCenter writes warnings to the log and, if possible, suggests
corrective actions. Take the corrective action if prompted to do so.
Note: If, for any reason, there is an interruption to the server during a database update, the server resumes the
update upon restart. ClaimCenter accomplishes this by storing the steps in the database and marking them
completed as part of the same database transaction that applies a change. This only applies to data model updates
and does not apply to product version upgrades.
See also
• “Run a Schema Verification Report” on page 261
• “About the Upgrade and Versions Screen” on page 259
• “View an Upgrade Report” on page 395
• “Understanding Guidewire Software Versioning” on page 395
• Configuration Guide
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
support after performing such an update query. Guidewire Support is not able to assist you with diagnosing and
correcting application issues caused by your database queries. It is your responsibility to restore ClaimCenter to a
consistent state.
WARNING Guidewire supports the in-built automatic database upgrade process only for Guidewire
InsuranceSuite products. Guidewire explicitly does not support any alternative process that executes SQL
DDL commands on the database.
If you have a legitimate need to update underlying application data, Guidewire recommends that you use Guidewire
APIs, either Java or Gosu, to perform the necessary updates. This ensures that you do not miss any critical side
effects of the updates in the process of altering the data. Using Guidewire APIs to update application data is safer
than using SQL queries with regard to consistency. However, with any programming language or API it is still
possible to update data incorrectly or in ways that do not perform well. Therefore, before using the APIs, Guidewire
strongly recommends that you review your intended updates with your Guidewire Support Partner and/or Guidewire
Professional Services team.
In the rare case in which no API exists to correct a data corruption problem, Guidewire can advise you on the SQL
queries to use to correct these problems. In these cases, the SQL queries used to update the database must be written,
or approved, by Guidewire. This process ensures that all SQL queries use correct logic and that you take all potential
side effects into account.
Do not apply any other SQL queries to modify data in a ClaimCenter database. Guidewire does not provide, nor
review, such queries for situations in which an API or supported alternate method is available.
The ClaimCenter log lists a check type for each consistency check that it runs. Each check type in the log can have a
value of either 0 or 1:
• A value of 0 indicates that ClaimCenter expects the check to return zero results if the database is consistent.
• A value of 1 indicates that ClaimCenter expects the check to have a different return value.
ClaimCenter uses the check type for custom checks:
• If a check type 0 fails, the log records a failure description that ClaimCenter expected the query issued by the
check to return zero rows. The log includes the SQL query run by the check and the number of inconsistencies
returned by the query.
• If a check type 1 fails, the log includes a specific failure description.
Note: The check type in the ClaimCenter log is not the same as the check type shown in the Typelists section of the
ClaimCenter Data Dictionary.
See also
• “Consistency Checks” on page 361
IMPORTANT Set the checker attribute on the <database> element in database-config.xml to false under most circumstances. Guidewire
recommends that you do not set checker to true except in development environments with very small test data
sets.
See also
• “Configuring the Number of Threads for Consistency Checks” on page 265
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config:
a. Open file database-config.xml.
b. Add a checker attribute on the <database> element and set the attribute to true.
By default, Guidewire omits this attribute in the base configuration, in which case its value defaults to false. A sketch of the resulting element appears after this procedure.
c. Save and close file database-config.xml.
2. Restart the application server to trigger the database consistency checks:
• If working in a development environment, restart the QuickStart server.
• If working in a production environment, create a new WAR or EAR file and deploy the file to the
production server.
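For reference, a minimal sketch of the edited element in database-config.xml might look like the following. The database name shown here is illustrative, and the other attributes and subelements of <database> in your configuration remain unchanged.
<database name="ClaimCenterDatabase" checker="true">
  <!-- existing connection, pooling, and statistics settings remain as they are -->
</database>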
Next steps
See also
• “The Database Configuration File” on page 220
• “Configure Worker Threads for Consistency Checks in config.xml” on page 266
See also
• “Database Consistency Checks” on page 262
• “Consistency Checks” on page 361
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config→workqueue:
a. Open file work-queue.xml for editing.
b. Locate the block of code that contains the following workQueueClass value.
com.guidewire.pl.system.database.checker.DBConsistencyCheckWorkQueue
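This block follows the same pattern as the other work queue definitions in work-queue.xml. A hypothetical sketch, in which the progressinterval, worker, and batch-size values are illustrative rather than prescribed:
<work-queue
workQueueClass="com.guidewire.pl.system.database.checker.DBConsistencyCheckWorkQueue"
progressinterval="86400000">
<worker instances="1" batchsize="10"/>
</work-queue>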
Next steps
See also
• “Configuring Work Queues” on page 96
Procedure
1. In the ClaimCenter Studio Project window, expand configuration→config:
a. Open file config.xml for editing.
b. Locate the ConsistencyCheckerThreads parameter.
In the base configuration, the value of this parameter is 1.
c. Set the value of this parameter to a number that meets your business needs.
For example, set this value to five.
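For example, assuming the standard <param> syntax used for parameters in config.xml, raising the thread count to 5 might look like this:
<param name="ConsistencyCheckerThreads" value="5"/>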
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user su (in
the base configuration) and you must supply that user’s password.
Note: Only run this process if you encounter consistency check failures. You cannot schedule this process to run
periodically.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user su (in
the base configuration) and you must supply that user’s password.
Note: Only run this process if you encounter consistency check failures. You cannot schedule this process to run
periodically.
IMPORTANT Guidewire does not support re-sizing any database columns that are part of the ClaimCenter base
configuration.
Procedure
1. Shut down ClaimCenter.
2. Alter the table and add a new temporary column that is the new size.
3. Copy all of the data from the source column to the temporary column.
4. Alter the table and drop the source column.
Depending on the database, you might need to set the data in this column to all nulls before you can
drop the column.
5. Alter the table and add the new source column that is the new size.
6. Copy the data from the temporary column to the new source column.
7. Alter the table and drop the temporary column.
8. Restart ClaimCenter.
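The numbered steps above might translate into SQL along the following lines for a hypothetical extension table and column. The table name, column name, and data type are placeholders, the exact ALTER TABLE syntax varies by database vendor, and you should apply such changes only in consultation with your database administrator.
-- Step 2: add a temporary column at the new size (names and type are illustrative only)
ALTER TABLE cc_myentity_ext ADD description_tmp VARCHAR(500);
-- Step 3: copy the data from the source column to the temporary column
UPDATE cc_myentity_ext SET description_tmp = description_ext;
-- Step 4: drop the source column (set it to NULL first if your database requires it)
ALTER TABLE cc_myentity_ext DROP COLUMN description_ext;
-- Step 5: re-create the source column at the new size
ALTER TABLE cc_myentity_ext ADD description_ext VARCHAR(500);
-- Step 6: copy the data back from the temporary column
UPDATE cc_myentity_ext SET description_ext = description_tmp;
-- Step 7: drop the temporary column
ALTER TABLE cc_myentity_ext DROP COLUMN description_tmp;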
size over time and can adversely affect performance and waste disk space. Excessive records in these tables also
negatively impact the performance of the database upgrade.
Guidewire recommends that you periodically purge workflows, workflow log entries, and workflow items for
completed activities to improve database upgrade and operational performance and to recover disk space.
See also
• “Purging Workflow Data” on page 271
• “Purging Workflow Log Data” on page 272
• “Purging Work Item Set Data” on page 272
It is possible to launch Bulk Purge batch processing from within ClaimCenter or directly from a command prompt.
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Bulk Purge batch process.
Command prompt Launch the Bulk Purge batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess BulkPurge
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
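For example, to run the process as a specific administrative user instead of the default su user, you might enter a command like the following, where admin is a placeholder user name:
maintenance_tools -user admin -password password -startprocess BulkPurge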
See also
• “Bulk Purge Batch Processing” on page 107
• “Understanding Claim Purging” on page 273
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Process History Purge batch process.
Command prompt Launch the Process History Purge batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess ProcessHistoryPurge
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Process History Purge Batch Processing” on page 122
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Cluster Members batch process.
Command prompt Launch the Purge Cluster Members batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeClusterMembers
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Cluster Members Batch Processing” on page 122
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Failed Work Items batch process.
Command prompt Launch the Purge Failed Work Items batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeFailedWorkItems
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Failed Work Items Batch Processing” on page 123
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Message History batch process.
Command prompt Launch the Purge Message History batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeMessageHistory
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Message History Batch Processing” on page 123
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Old Transaction IDs batch process.
Command prompt Launch the Purge Old Transaction IDs batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeTransactionIDs
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “The Work Queue Scheduler” on page 93
• “Purge Old Transaction IDs Batch Processing” on page 123
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Profiler Data batch process.
Command prompt Launch the Purge Profiler Data batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeProfilerData
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Profiler Data Batch Processing” on page 124
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Workflow batch process.
Command prompt Launch the Purge Workflows batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeWorkflows
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Workflow Batch Processing” on page 124
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Purge Workflow Logs batch process.
Command prompt Launch the Purge Workflow Logs batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess PurgeWorkflowlogs
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Purge Workflow Logs Batch Processing” on page 124
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Work Item Set Purge batch process.
Command prompt Launch the Work Item Set Purge batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess WorkItemSetPurge
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Work Item Set Purge Batch Processing” on page 129
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
ClaimCenter Navigate to the Server Tools Batch Process Info screen and run the Work Queue Instrumentation Purge batch process.
Command prompt Launch the Work Queue Instrumentation Purge batch process from the ClaimCenter/admin/bin directory with the following command:
maintenance_tools -password password -startprocess WorkQueueInstrumentationPurge
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
See also
• “Work Queue Instrumentation Purge Batch Processing” on page 129
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
find the claim by searching. It is also not possible to restore a purged claim. Purging removes all traces of the claim
from the database and the archive.
For ClaimCenter to purge a claim, the claim must meet the following conditions:
• Unless it has draft status, the claim cannot have any active (unanswered) messages, or ClaimCenter throws an
ActiveMessageException.
• Unless it has draft status, the claim cannot be part of a workflow, or ClaimCenter throws an
ActiveWorkflowException.
Note: These conditions do not prevent you from purging an open claim.
As you purge a claim:
• All traces of the claim disappear from the main database. ClaimCenter uses the claim domain graph, which
identifies all tables and rows associated with the claim, for purging.
• ClaimCenter removes the claim from all claim associations. If there is only one claim remaining in the
association, ClaimCenter also removes the association itself.
• If there are no remaining claims associated with the claim’s policy period, ClaimCenter removes the policy
period.
Before doing your initial purge, Guidewire recommends that you back up the ClaimCenter database. As it is
probably not practical to back up the database before every purge, you need to develop a policy for making backups
that takes purges into account.
Table cc_purgedrootinfo
Whenever ClaimCenter purges a claim, it deletes the Claim domain graph instance for that claim, including the
ClaimInfo object that is the root of the domain graph instance. However, to track which claims it purged,
ClaimCenter creates a PurgedRootInfo entity at the same time it purges a claim to record the ClaimInfo public ID
of the purged claim. ClaimCenter then stores each PurgedRootInfo instance as a row in table cc_purgedrootinfo.
ClaimCenter does not automatically delete these entities from the table. If you have a high purge volume, Guidewire
recommends that you create a batch process to delete old, unwanted PurgedRootInfo instances from this table.
• In review
• Not valid
The error message includes the associated bulk invoice number. In the case of multiple bulk invoices causing the
failure, only the first invoice number is referenced.
See also
• “Understanding Claim Marking for Purging” on page 275
• “Purging Claims Using Command Prompt Tools” on page 275
• “Purge Claims Using Web Services” on page 276
• Rules Guide
See also
• “Maintenance Tools Command” on page 418
• “Bulk Purge Batch Processing” on page 107
Procedure
1. Open a command prompt and navigate to the following location in the ClaimCenter installation.
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
You must also supply a value for claimsforpurging, which must be one of the following:
• A single claim number
• A comma-delimited list of claim numbers
• The name of a text file containing a list of claim numbers separated by commas or new lines
3. (Optional) If any of the claims that you are marking for purging have aggregate limits, include the -purgefromaggregatelimit command option as well:
Procedure
1. Open a command prompt and navigate to the following location in your ClaimCenter installation.
admin/bin
2. Enter the following maintenance_tools command options to run the Bulk Purge batch process.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Result
ClaimCenter permanently deletes all claims that you marked for purging from the ClaimCenter database and the
archive. ClaimCenter also deletes the ClaimInfo record for each purged claim.
Schedule the batch process Guidewire does not schedule the Bulk Purge batch process in the base configuration. You can schedule this batch process to run at regular intervals or at a convenient time after you complete marking claims for purging.
Run the batch process immediately Call the startBatchProcess("bulkpurge") method on the IMaintenanceToolsAPI web service.
Result
After the Bulk Purge batch process completes, ClaimCenter deletes the marked claims.
Next steps
See also
• “Understanding Claim Purging” on page 273
• “Purging Claims Using Command Prompt Tools” on page 275
• “Bulk Purge Batch Processing” on page 107
• Integration Guide
Stale Data
In performance testing, Guidewire observed significant performance degradation if the configuration forced the
materialized view to refresh on commit. This is due to a synchronization enqueue required by the refresh process.
However, any refresh of the data done outside of the commit operation can potentially display stale data during the
search.
Oracle uses a cost-based optimizer approach to determine whether to use a materialized view for a given query. It
also expects the data to be fresh for the rewrite. As Guidewire bases the refresh process on the number of changes to
contact and claim contacts, Guidewire strongly recommends that you schedule the refresh process accordingly.
Value Meaning
FORCE/STALE Oracle attempts to rewrite the query using an appropriate materialized view even if the optimizer cost
estimate is high. Oracle allows the rewrite even if the data in the materialized view is not the same as in the base
tables.
FORCE/NOSTALE Oracle attempts to rewrite the query using an appropriate materialized view even if the optimizer cost
estimate is high. Oracle ignores the materialized view if the data in the view is not fresh.
COST/STALE If the Oracle cost-based optimizer evaluates the rewrite to be cheaper than other plans, it uses the
materialized view. If it is costlier to execute the rewritten path, then Oracle performs a join of the base
tables. The rewrite can happen even if the data in the view is stale.
COST/NOSTALE If the Oracle cost-based optimizer evaluates the rewrite to be cheaper than other plans, it uses the
materialized view. If it is costlier to execute the rewritten path, then Oracle performs a join of the base
tables. If the data in the view is not fresh, Oracle ignores the view and performs the join on the base tables.
This topic discusses database statistics, metadata that describe the underlying database.
IMPORTANT Have your database administrator (DBA) review the database statistics with you.
Note: A change in the value of useoraclestatspreferences takes effect only during an application upgrade.
Disable the automatic generation of database statistics by Oracle by doing one of the following:
• Disable the Oracle AutoTask “auto optimizer stats collection” automated task.
• Set the AUTOSTATS_TARGET preference to ORACLE. This action ensures that the automated task gathers
statistics for the Oracle Dictionary only.
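For example, setting that preference follows the same dbms_stats pattern as the statements shown later in this topic:
EXEC dbms_stats.set_global_prefs('AUTOSTATS_TARGET','ORACLE');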
If using DBStats batch processing to manage the collection of database statistics:
• Do not execute Oracle dbms_stats manually.
• Manually execute, or schedule, DBStats batch processing.
See also Disable Automatic Database Statistics Generation by Oracle in the System Administration Guide.
Procedure
1. Disable Oracle database statistics generation.
2. Delete the schema statistics.
3. Gather full database statistics using the Guidewire DBStats batch process.
IMPORTANT Consult with your Database Administrator before starting the database statistics process.
Some ClaimCenter batch processes use work or scratch tables to store intermediate calculations. Other batch
processes populate denormalized tables that ClaimCenter uses internally for performance reasons. These processes
can update database statistics on the scratch tables and denormalized tables during their execution.
When you import data into tables from an external source by using the table_import -integritycheckandload
command, ClaimCenter validates the staging tables against the data model and the production tables. To avoid
poorly optimized queries prior to loading the new data, ClaimCenter updates statistics for the involved tables before and
after the load. For optimum performance, generate full database statistics after running the integritycheckandload
command option.
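As a sketch, the import might be launched along the following lines; additional options are normally required, so see the Table Import Command reference for the full syntax. The user name admin is a placeholder.
table_import -user admin -password password -integritycheckandload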
See also
• “Database Statistics Batch Processing” on page 114
• “Managing Database Statistics using System Tools” on page 282
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
• “Table Import Command” on page 430
For database upgrades, ClaimCenter updates database statistics for objects that the upgrade process changes
significantly. For optimum performance, generate full database statistics during the next maintenance window after
performing a major upgrade.
Full database statistics Generates database statistics for every table in the ClaimCenter database.
Incremental database statistics Generates database statistics for tables for which the change in the table data caused by inserts and deletes exceeds a certain percentage threshold. You specify this threshold through the incrementalupdatethresholdpercent attribute on the <databasestatistics> element in file database-config.xml. The default is 10 percent.
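For example, to require a 20 percent change before incremental statistics regeneration, the attribute might be set as follows in database-config.xml (a sketch that omits the other attributes on the element):
<databasestatistics incrementalupdatethresholdpercent="20"/>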
It is possible to pause the database statistics updating process, just as you can with other work queues. Use the
Server Tools Work Queue Info page to pause an in-progress work queue.
See also
• “Understanding Database Statistics” on page 279
• “System Tools Command” on page 422
admin/bin
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Procedure
1. Ensure that the ClaimCenter server is running.
2. Open a command prompt and navigate to the following location in the ClaimCenter installation directory:
admin/bin
3. Enter the following command to update statistics for tables exceeding the change threshold.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
admin/bin
3. Enter the following command to check on the state of the process that updates database statistics.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Procedure
1. Ensure that the ClaimCenter server is running.
2. Open a command prompt and navigate to the following location in the ClaimCenter installation directory:
admin/bin
3. Enter the following command to cancel the process that updates database statistics.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Procedure
1. Ensure that the ClaimCenter server is running.
2. Open a command prompt and navigate to the following location in the ClaimCenter installation directory:
admin/bin
3. Enter the following command to generate database statistic SQL statements for all tables.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Result
ClaimCenter groups the output statements by table.
admin/bin
3. Enter the following command to generate database statistic SQL statements for tables exceeding the change
threshold.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Result
ClaimCenter groups the output statements by table.
configuration→config→workqueue. Set parameters for the work queue with workQueueClass set to
com.guidewire.pl.system.database.dbstatistics.DBStatisticsWorkItemWorkQueue. For example:
<work-queue
workQueueClass="com.guidewire.pl.system.database.dbstatistics.DBStatisticsWorkItemWorkQueue"
progressinterval="86400000">
<worker instances="5" batchsize="10"/>
</work-queue>
In this configuration, the statistics generation work queue uses five worker threads and each worker checks out ten
work items at a time.
If you edit work-queue.xml, you must rebuild and redeploy ClaimCenter.
See also
• “The Work Queue Scheduler” on page 93
The following example, for an Oracle database connection, shows the use of these parameters:
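This is a minimal sketch: the <databasestatistics> and <tablestatistics> element and attribute names come from this topic, while the <columnhistogram> element name is an assumption (see “The Histogram Statistics Database Element” on page 290 for the exact syntax).
<databasestatistics databasedegree="4" samplingpercentage="0">
<tablestatistics name="cc_table1" samplingpercentage="0" databasedegree="4"/>
<tablestatistics name="cc_check">
<columnhistogram name="scheduledsenddate" numbuckets="254"/>
</tablestatistics>
<tablestatistics name="cc_table2" action="delete"/>
</databasestatistics>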
The previous example configures the following database statistic generation behavior:
• Collects statistics on all ClaimCenter tables using the automatic sampling size and with a degree of parallelism of
4.
• Samples table cc_table1 using Oracle AUTO_SAMPLE_SIZE, which is the recommended value.
• Uses a parallel degree of 4.
• Defines a histogram with 254 buckets (time slots) on cc_check.scheduledsenddate. Guidewire requires that
you provide a value for this attribute.
• Deletes statistics on cc_table2 due to the attribute action="delete".
See also
• “The Database Statistics Element” on page 285
• “The Table Statistics Database Element” on page 289
databasedegree On Oracle, this attribute controls the degree of parallelism for each individual
statement. The default is 1. ClaimCenter uses the value of this attribute for all
statements.
SQL Server ignores the databasedegree attribute.
incrementalupdatethresholdpercent Specifies the percentage of table data that must have changed since the last
statistics process for the incremental statistics generation batch process to update
statistics for the table.
The default is 10.
numappserverthreads On both Oracle and SQL Server, the numappserverthreads attribute controls the
number of threads that ClaimCenter uses to update database statistics for staging
tables during import only. Command prompt tool table_import launches this
import.
The value defaults to 1. If the value is greater than 1, then the ClaimCenter server
assigns a table at a time to each thread as the thread becomes available. Each
thread executes all of the database statistics statements for its assigned table.
For all other statistics generation operations, set the number of threads by
specifying the number of workers for the database statistics work queue. Set the
instances attribute on the <worker> subelement of the <work-queue> element for
the database statistics work queue. This element has
workQueueClass="com.guidewire.pl.system.database.dbstatistics.DBStatisticsWorkItemWorkQueue".
The default is 1.
samplingpercentage On Oracle, this attribute controls the value of the estimate_percent parameter in
the dbms_stats.gather_table_stats() SQL statements. You can set
samplingpercentage to an integer from 1 to 100 to directly set the estimate_percent value.
However, Guidewire strongly recommends that you set the samplingpercentage
value to 0 to set estimate_percent to AUTO_SAMPLE_SIZE. The default value is 0.
On SQL Server, the samplingpercentage attribute controls the value of the WITH
FULLSCAN/SAMPLE PERCENT clause in the UPDATE STATISTICS statements. A value
of 100, the default, translates into WITH FULLSCAN, as does a value of 0.
The default is 0.
useoraclestatspreferences On Oracle, this attribute sets the database statistics preferences to be able to use
the Oracle Autotask infrastructure instead of the DBStats batch process from
ClaimCenter. The default is false, which requires that you disable the Autotask and
schedule DBStats batch processing in its place. Changes to the value of this
attribute only take effect during an application upgrade.
The values you set for these attributes apply to all the tables in the database. You can fine-tune these values and set
specific values on individual tables by using the <tablestatistics> subelement. Setting values on a specific table
overrides the values set on the database for just that table.
See also
• “Configuring Database Statistics Generation” on page 284
• “The Database Statistics Element” on page 285
• “Using Oracle AutoTask for Statistics Generation” on page 287
• “Table Import Command” on page 430
Any change to this attribute value takes effect only during an upgrade, either a full upgrade or a rolling
(configuration) upgrade. To force ClaimCenter to recognize the change without an application upgrade, increment
the application metadata version and restart the application server.
After you set this attribute to true, the next application upgrade does the following:
• It clears all existing preferences for table statistics.
• It resets the preferences for table statistics to those currently defined for Oracle table statistics in database-config.xml.
• It creates a new tab named Oracle Statistics Preferences in the Server Tools Info Pages→Database Catalog Statistics
Information screen.
To confirm that the application upgrade set the database statistics preferences, review the Oracle Statistics Preferences
tab and verify the preferences.
EXEC dbms_stats.set_global_prefs('AUTOSTATS_TARGET','AUTO');
EXEC dbms_auto_task_admin.enable(client_name => 'auto optimizer stats collection', operation => NULL,
window_name => NULL);
EXEC dbms_stats.delete_schema_stats('CCUSER');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname=>'CCUSER', options=>'GATHER');
Setting the updatestatistics attribute to true allows the ClaimCenter upgrader to create any additional
histograms required by the new application version.
For more information, refer to the following Oracle documentation:
• Oracle Database Administrator’s Guide, "Managing Automated Database Maintenance Tasks"
• Oracle Database SQL Tuning Guide, "Managing Optimizer Statistics: Basic Topics"
Resetting useoraclestatspreferences
Any change to the useoraclestatspreferences attribute in database-config.xml (from false to true or from
true to false) takes effect only after an upgrade, either a full upgrade or a rolling (configuration) upgrade.
However, if you reset this attribute from true to false, ClaimCenter throws an exception during the next upgrade
and prevents the upgrade from continuing due to locked table statistics in the Oracle database. Review the details of
the exception provided in the server log to determine which table statistics need to be unlocked. See “Revert to
DBStats Batch Processing for Database Statistics” on page 288 for information on how to unlock the table statistics.
See also
• “The Oracle Statistics Preferences Tab” on page 375
Procedure
1. Reset attribute useoraclestatspreferences to false in file database-config.xml.
2. Start the application server in upgrade mode.
The upgrade fails due to locked table statistics in the Oracle database.
3. Review the server log for which table statistics need to be unlocked.
Search for text that is similar to the following:
By default, ClaimCenter on Oracle does not generate statistics on any table used for processing work items.
ClaimCenter deletes any existing statistics on these tables whenever it updates statistics. You can override this
behavior by using the action attribute of the <tablestatistics> element, as illustrated in the sketch after this list. You can set the action attribute to one
of the following values:
delete Delete the statistics on the table. This value does nothing in SQL Server.
force Update statistics for this table while running incremental statistics, regardless of the value of attribute incrementalupdatethresholdpercent on the <databasestatistics> element.
keep Keep the existing statistics. ClaimCenter does not update statistics for any table for which the user explicitly specifies
keep as the value for the action attribute. This value affects any type of database.
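As a sketch, keeping statistics on one table and forcing them on another might look like the following; the table names are placeholders:
<databasestatistics>
<tablestatistics name="cc_mytable1" action="keep"/>
<tablestatistics name="cc_mytable2" action="force"/>
</databasestatistics>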
See also
• “Configuring Database Statistics Generation” on page 284
• “The Database Statistics Element” on page 285
• “The Histogram Statistics Database Element” on page 290
The name attribute specifies a column name. The numbuckets attribute controls the maximum number of buckets for
the specified histogram. Guidewire requires that you provide a value for this attribute. The default value for the
number of buckets is 254 for the retired and subtype columns. For all other columns, ClaimCenter uses 75, the
database default.
Notes
• For performance reasons, ClaimCenter does not currently create a histogram on publicid columns. These
columns are rarely, if ever, referenced in a WHERE clause.
• Also for performance reasons, ClaimCenter tries to combine as many columns as possible into a single statement.
Certain tabs in the Database Catalog Statistics page display a dbms_stats.gather_table_stats(...'FOR
COLUMNS ...') statement with only the associated column for each histogram, regardless of the parameter
values. This enables you to specify the most granular statement if a given histogram is out of date.
See also
• “Configuring Database Statistics Generation” on page 284
• “The Database Statistics Element” on page 285
• “The Table Statistics Database Element” on page 289
Guidewire categorizes certain types of application data as administration data. (Guidewire frequently shortens this
term to just admin data.) For example, activity patterns are administration data. This topic provides information
related to importing administrative data into, and exporting data from, Guidewire ClaimCenter. While users enter
much of the information into ClaimCenter directly, at times, it is more convenient or necessary to enter information
in bulk.
IMPORTANT Never modify ClaimCenter operational database tables directly with SQL commands.
IMPORTANT Guidewire recommends that you set the value of this configuration parameter to true in a non-
production test environment only.
See also
• “Automatic Import of Business Rules at Server Startup” on page 344
• Configuration Guide
admin/bin
4. Run the following command to import the data in file roleprivileges.csv in the import→gen folder:
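As a sketch, where admin is a placeholder user name and fileName is the path to roleprivileges.csv in the import→gen folder:
import_tools -user admin -password password -import fileName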
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
Next steps
See also
• “Character Set Encoding for File Import” on page 294
• “Import Tools Command” on page 417
See also
• “Import Tools Command” on page 417
Procedure
1. Export the ClaimCenter roles.
2. Export the administration data.
3. Import the roles into the new system.
4. Import the administration data into the new system.
build/dictionary/data/index.html
The Data Dictionary describes the structure of business objects stored persistently by ClaimCenter, and the
dictionary defines the properties and foreign key references for each object. If you change the data model, regenerate
the ClaimCenter Data Dictionary by using the following command:
gwb genDataDictionary
See also
• Configuration Guide
Public ID Prefix
Each entity that you import into ClaimCenter requires a unique public ID. This is separate from the system ID that
ClaimCenter assigns internally and uses for most system processing. Foreign key references between related objects
use this public ID.
Typically, a company imports data from multiple external sources. If you do import data from multiple sources, use
a naming convention to generate public IDs for external sources. For example, if you import from two systems
(Adminsystem and Salessystem), each source can have a contact entity with ID=5432. Thus, Guidewire
recommends that you use the following ID format so that the IDs do not register as duplicates:
origin:ID
By using this format, the contact from the first system comes in as adminsystem:5432 and the contact from the
second system comes in as salessystem:5432. Thus, there is no risk of duplicate IDs. There is also the benefit of
knowing from which system the record originated.
Public IDs need to be unique only within objects of the same type. For example, all policy objects must have a
different public ID. However, a claim and a policy with the same public ID do not conflict with each other. Public
IDs cannot exceed 20 characters in length.
ClaimCenter appends this public ID prefix to each administrative entity created in an individual ClaimCenter
configuration. Thus, the public ID format becomes the following:
{PublicIDPrefix}:{ID}
If you have multiple developers, you must coordinate the public ID prefix so that no public ID prefixes overlap. For
example, each engineer can use their initials or computer name as a public ID prefix.
You can use the same prefix for multiple development and testing databases if you do not ever transfer data between
them.
1: ADDRESS
2: type,data-set,entityid,addresstype,addressline1,createuser
3: Address,0,ab:1001,home,1253 Paloma Ave.,import_tools
4: Address,0,ab:1002,business,325 S. Lake Ave.,import_tools
The import_tools command distinguishes between two types of information in an import file:
• Heading information
• Data information.
ClaimCenter treats any line that contains the string entityid as a heading.
ClaimCenter considers as data any line:
• Without an entityid string
• With comma delimited values
• With a value in its third comma-delimited field
In the example shown, the import_tools command treats line 2 as a heading and lines 3 – 4 as data. The
import_tools command ignores line 1. If the command encounters a data line before a heading line, it returns an
error.
IMPORTANT Guidewire supports using the import_tools command to import administrative data only.
type,data-set,entityid,addresstype,addressline1,createuser
The following fields follow the three required fields (type, data-set, and entityid):
• addresstype – Represents a typelist
• addressline1 – Represents a column
You do not have to specify all fields in the entity within the import file. You must specify at least the required fields.
You can determine which fields ClaimCenter requires by viewing the entity description in the ClaimCenter Data
Dictionary.
Use lowercase to specify fields, including arrays. In this example, specify AddressLine1 in the data model as
addressline1 in the import file.
To specify a foreign key, use the foreign key name without the concluding ID. In this example of a Person import:
1: type,data-set,entityid,firstname,lastname,primaryaddress,workphone,primaryphone,
taxid,vendortype,specialtytype
2: Person,0,demo_sample:1,Ray,Newton,demo_sample:4000,818-446-1206,work,,,
3: Person,0,demo_sample:2,Stan,Newton,demo_sample:4002,818-446-1206,work,,,
4: Person,0,demo_sample:3,Harry,Shapiro,demo_sample:1004,818-252-2546,work,,,
5: Person,0,demo_sample:4,Bo,Simpson,demo_sample:1003,619-275-2346,work,,,
6: Person,0,demo_sample:5,Jane,Collins,demo_sample:4003,213-457-6378,work,,,
7: Person,0,demo_sample:6,John,Dempsey,demo_sample:1006,213-475-9465,work,,,
The primaryaddress field is a foreign key to the Address entity. It appears as PrimaryAddressID in the
ClaimCenter Data Dictionary but as primaryaddress in the import data.
If you specify a field in the heading that is not a recognizable column, typelist, foreign key or array, the import
program silently ignores the column and any associated data. In the following example, the import ignores the %$zed
heading field and the somedata value in line 3:
1: ADDRESS
2: type,data-set,entityid,addresstype,addressline1,createuser, %$zed
3: Address,0,ab:1001,home,1253 Paloma Ave.,import_tools, somedata
4: Address,0,ab:1002,business,325 S. Lake Ave.,import_tools,
1: Policy
2: type,data-set,entityid,account,corepolicynumber,policytype,producttype,productversion,
systemofrecorddate,,,,,,,,,,,
3: Policy,0,ds:1,ds:1,34-123436-CORE,wc,wc_workerscomp,1,1/1/2002,,,,,,,,,,,
4: Policy,0,ds:2,ds:1,25-123436-CORE,bop,bop_businessowners,1,1/1/2002,,,,,,,,,,,
5: Policy,0,ds:3,ds:3,54-123456-CORE,personalauto,pa_personalauto,1,1/1/2002,,,,,,,,,,,
6: Policy,0,ds:4,ds:4,25-708090-CORE,bop,bop_businessowners,1,1/1/2002,,,,,,,,,,,
7: Policy,0,ds:5,ds:2,98-456789-CORE,bop,bop_businessowners,1,1/1/2002,,,,,,,,,,,
8: Policy,0,ds:6,ds:1,20-123436-CORE,businessauto,ba_businessauto,1,1/1/2002,,,,,,,,,,,
9: Policy,0,ds:7,ds:1,50-123436-CORE,umbrella,u_umbrella,1,1/1/2002,,,,,,,,,,,
...
In lines 3 - 9, the entity name Policy appears in the first field as required. The capitalization of an entity or subtype
name must be identical to that used in the ClaimCenter Data Dictionary. For example, for a RevisionAnswer
data line, the entity name is invalid if you specify it as revisionanswer.
2: type,data-set,entityid,account,corepolicynumber,policytype,producttype,productversion,
systemofrecorddate,,,,,,,,,,,
3: Policy,0,ds:1,ds:1,34-123436-CORE,wc,wc_workerscomp,1,1/1/2002,,,,,,,,,,,
ClaimCenter orders data-sets by inclusion. Thus, data-set 0 is a subset of data-set 1 and data-set 1 is a subset of data-
set 2, and so forth. It is possible to request a particular data-set while converting CSV to XML. By default,
ClaimCenter requests data-set 10240. ClaimCenter assumes that data-set 10240 includes every data-set that it is
possible to create.
You can leave the second field blank, in which case ClaimCenter always includes this object in the import regardless
of the requested data-set.
1 ADDRESS
2 type,data-set,entityid,addresstype,addressline1,createuser
3 Address,0,ab:1001,home,1253 Paloma Ave.,import_tools
4 Address,0,ab:1002,business,325 S. Lake Ave.,import_tools
6
7 PERSON
8 type,data-set,entityid,firstname,lastname,primaryaddress,inaddressbook,loadrelatedcontacts,
referred,contactaddresses
9 Person,0,ab:2001,John,Foo,ab:1002,true,true,true,ContactAddress|address[ab:10001,ab:1002]
10 Person,0,ab:2002,Paul,Bar,ab:1002,false,false,false,ContactAddress|address[ab:10001]
11 Person,0,ab:2003,David,Goo,,false,true,false,,
In the previous example, the primaryaddress on line 9 is a foreign key to the Address specified on line 4.
If ClaimCenter cannot resolve a foreign key reference and does not require the foreign key, ClaimCenter imports the
data, sets the foreign key field to null, and reports an error. If ClaimCenter does require the foreign key, then
ClaimCenter reports an error and does not import that data.
arraykey|foreignkey[publicID,publicID,...]
In the PERSON example (line 9), the arraykey value is the array key on the parent entity (Person). The foreignkey
is the foreign key name of the array without the ID. ContactAddress is the array key and address is the foreign key
name. The public ID values [publicID,publicID,...] correspond to public IDs that are referenced by the foreign
key.
In this format, the arraykey is optional. However, you might want to retain it for readability.
[ [array_entry];[array_entry]; ...]
Enclose each array_entry in brackets. Separate multiple entries with semicolons. Enclose all completed entries in a
second set of brackets. Each array_entry is made up of comma-separated [type|value] pairs as follows:
[[[type|value],[type|value]];[[type|value],[type|value]]]
The type is the name of a column, typelist, or foreign key, as in a heading line. The value is the column value,
typelist typecode, or a foreign key. In the following sample, there are three array_entry specifications:
Group
type,data-set,entityid,users
Group,0,demo_sample:27,[[[user|demo_sample:101],
[loadfactor|50],[loadfactortype|loadfactorview]];[[user|demo_sample:102],
[loadfactor|100],[loadfactortype|loadfactoradmin]];
[[user|demo_sample:103],[loadfactor|50],[loadfactortype|loadfactorview]]]
Procedure
1. First, export the current administrative data as an XML file by using the functionality available on the Export
Data screen available to ClaimCenter administrators. This screen provides the ability to export various types of
administrative data in XML format.
2. Modify the file to suit your business needs, carefully preserving the XML formatting for administrative data.
3. Regenerate the XSD files by using the following command in the ClaimCenter installation directory:
gwb genImportAdminDataXsd
It is important to regenerate the XSD files every time that you modify the data model.
4. Re-import the modified file by using either the import_tools command or the Import Data screen available to
ClaimCenter administrators. This screen provides the ability to import administrative data in XML format into
ClaimCenter.
Next steps
See also
• “Importing and Exporting Administrative Data from ClaimCenter” on page 303
• “Constructing the XML for the Administrative Data Import File” on page 300
• “Import Tools Command” on page 417
gwb genImportAdminDataXsd
Regenerate the XSD files any time you modify the data model. These files are likely to change as you configure the
data model.
The following XML example shows the default activity pattern 30 Day Diary from the ClaimCenter administrative
export file.
<?xml version="1.0"?>
<import xmlns="http://guidewire.com/pc/exim/import" version="p5.86.a12.309.46"
usePeriodicFlushes="true">
<ActivityPattern public-id="sample_pattern:19">
<ActivityClass>task</ActivityClass>
<AutomatedOnly>false</AutomatedOnly>
<Category>reminder</Category>
<Code>30_day_diary</Code>
<Command/>
<Description/>
<Description_L10N_ARRAY/>
<DocumentTemplate/>
<EmailTemplate/>
<EscBusCalLocPath/>
<EscalationBusCalTag/>
<EscalationDays/>
<EscalationHours/>
<EscalationInclDays/>
<EscalationStartPt/>
<Mandatory>false</Mandatory>
<PatternLevel>All</PatternLevel>
<Priority>normal</Priority>
<Recurring>true</Recurring>
<ShortSubject/>
<ShortSubject_L10N_ARRAY/>
<Subject>30 day diary</Subject>
<Subject_L10N_ARRAY/>
<TargetBusCalLocPath/>
<TargetBusCalTag/>
<TargetDays>30</TargetDays>
<TargetHours/>
<TargetIncludeDays>elapsed</TargetIncludeDays>
<TargetStartPoint>activitycreation</TargetStartPoint>
<Type>general</Type>
</ActivityPattern>
</import>
You can:
• Modify any existing entry in the administrative export file and re-import the file.
• Add additional entries by using the existing entries as a model and re-import the file.
gwb genImportAdminDataXsd
You can validate the XML of your import file against a cc_import.xsd file by using the following code:
uses java.io.File
uses java.io.FileInputStream
uses javax.xml.validation.SchemaFactory
uses javax.xml.XMLConstants
uses javax.xml.parsers.SAXParserFactory
uses org.xml.sax.helpers.DefaultHandler
uses gw.testharness.TestBase
// Assumed setup: load cc_import.xsd and build a namespace-aware, validating SAX parser
var sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
var schema = sf.newSchema(new File("cc_import.xsd"))
var spf = SAXParserFactory.newInstance()
spf.Schema = schema
spf.NamespaceAware = true
var parser = spf.newSAXParser()
var fis = new FileInputStream("myImportFile.xml")
parser.parse(fis, new DefaultHandler())
IMPORTANT ClaimCenter supports this command for importing administrative data, but not for importing other
types of data. Instead, use staging tables or APIs other than ImportToolsAPI to import non-administrative
types of data into ClaimCenter.
ClaimCenter uses the public ID of each object in the data import to determine if an object with that public ID
already exists in the database. See “Administrative Data and the ClaimCenter Data Model” on page 295 for a
discussion of public IDs.
During import, if ClaimCenter finds a match in entity public IDs, it does the following:
• If there is no difference between the import record and the database record, ClaimCenter ignores the import
record.
• If there are differences between the two records, ClaimCenter overwrites any existing database record values
with the values from the import file. ClaimCenter does not throw a concurrent data change exception if the
imported records overwrite existing records in the database.
• If there are null entries for a record in the import file, ClaimCenter nulls out those values in the record in the
database.
IMPORTANT Guidewire supports using the import_tools command to import administrative data only.
See also
• “Import Tools Command” on page 417
Procedure
1. Create a CSV or XML file describing the data, by using one of the following methods:
• Create the XML or CSV file manually.
• Export the current administrative data as an XML file from ClaimCenter and modify the file.
2. Import the CSV or XML file by using the import_tools command:
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
The -import option requires that you provide the name of the file to import (fileName). There are a number
of other options that you can set as well.
Next steps
See also
• “Constructing a CSV File for Import” on page 296
• “Construct an XML File for Import” on page 299
• “Import Tools Command” on page 417
Importing Arrays
ClaimCenter handles arrays differently depending on whether it is importing an owned array or a virtual array:
• Owned array – If an entity owns the array, ClaimCenter notifies you of the differences between the imported
data and any existing data. However, you do not have the choice of resolving the array elements. ClaimCenter
only gives you the option to delete the current array and replace all of the contents of the imported array.
• Virtual array – It is not possible to replace the contents of a virtual array with those of an imported array, as it is
not possible to delete a virtual array.
Procedure
1. Log into Guidewire ClaimCenter as a user with the viewadmin and soapadmin permissions.
2. Select the Administration tab.
3. In the left-hand navigation pane, expand Utilities→Import Data.
4. Click Browse... to search for the XML file containing data to import.
5. Click Finish to import data from the file.
Next steps
During an import, ClaimCenter does not run validation rules. However, ClaimCenter does run pre-update rules. For
this reason, run user exception and group exception batch processing after you import administrative data.
IMPORTANT If a particular data set is not on the export list, you cannot export it by using this function.
During export or import of users and groups, ClaimCenter also exports or imports any entities referred to by any
User or UserRole object through a foreign key or array.
You can export XML-formatted administrative data only.
Procedure
1. Log into Guidewire ClaimCenter as a user with the viewadmin and soapadmin permissions.
2. Select the Administration tab.
3. In the left-hand navigation pane, expand Utilities→Export Data.
4. Select the data set to export.
5. Click Export to download the XML file.
Role Definitions
File roles.csv contains a list of ClaimCenter roles, along with a human-readable name and description for each
role. Within this file, set the name and description fields to whatever is useful in uniquely identifying the role.
ClaimCenter reads the file, starting with the first row that contains the entityid identifier and imports the data into
the database.
The following code samples are examples of role definition entries:
Roles,
type,data-set,entityid,description,name,carrierinternalrole,roletype
Role,0,superuser,Superuser with full permissions,Superuser,true,user
Role,0,underwriter_supervisor,Base permissions for a supervisor,Underwriting Supervisor,true,user
Role,0,underwriter,Permissions for underwriter,Underwriter,true,user
Role,0,underwriter_asst,Permissions for underwriter assistant,Underwriter Assistant,true,user
Role,0,processor,Permissions for processor,Processor,true,user
,,,,,,
type,data-set,entityid,permission,role
RolePrivilege,0,default_data:2,abview,adjuster
RolePrivilege,0,default_data:3,abviewsearch,adjuster
RolePrivilege,0,default_data:4,actcreate,adjuster
RolePrivilege,0,default_data:5,actcreateclsd,adjuster
RolePrivilege,0,default_data:6,actown,adjuster
RolePrivilege,0,default_data:7,actqueuenext,adjuster
,,,,
Each row in file roleprivileges.csv maps a single permission to a role. Each role has multiple permissions and
thus multiple rows. For example, the abview entry grants permission to view the address book to the adjuster role.
The ClaimCenter Security Dictionary provides a full list of role permissions, along with a brief description of each. It
also provides a list of the correspondences between roles and permissions.
IMPORTANT It is not possible to add additional administrative data through this mechanism after the initial
database upgrade. ClaimCenter loads the administrative data contained in the gen folder only if starting from an
empty (non-upgraded) database.
See also
• “Ways to Import Administrative Data” on page 291
• Application Guide
AuthorityLimit,0,default_data:1,,default_data:1,,ctr,,,15000
AuthorityLimit,0,default_data:2,,default_data:1,,cptd,,,15000
AuthorityLimit,0,default_data:4,,default_data:2,,ctr,,,25000
AuthorityLimit,0,default_data:5,,default_data:2,,cptd,,,25000
AuthorityLimit,0,default_data:7,,default_data:3,,ctr,,,100000
AuthorityLimit,0,default_data:8,,default_data:3,,cptd,,,100000
...
The definitions in this example give the Adjuster profile (default_data:1) a $15,000 limit for claim total reserves (ctr) for general liability coverage.
Consult the ClaimCenter Data Dictionary for information on the AuthorityLimitType typelist, which contains the
possible limit type codes (for example, ctr). The costtype and coveragetype fields similarly contain code values.
You need not specify a costtype or coveragetype. You can, as these samples illustrate, simply leave these values empty.
http://www.cdc.gov/nchs/icd/icd10cm.htm
http://www.cms.hhs.gov/icd9ProviderDiagnosticCodes/07_summarytables.asp
Procedure
1. Open the PDF file of new codes.
2. If using Adobe Acrobat Professional, export the file to RTF (Rich Text Format). Alternatively, you can copy
the data into an RTF, page by page.
3. You may want to convert all the text to uppercase so that the data looks consistent with older codes. However,
this step is optional.
4. Copy the tables from the RTF into Microsoft Excel.
5. Set the code column to the text data type; otherwise, ClaimCenter truncates all leading and trailing zeros.
6. The column format needs to be as follows to load correctly into the ICDCode table:
type,data-set,entityid,bodysystem,code,codedesc,chronic,expirydate,availabilitydate
Key          Value
type         ICDCode
data-set     0
entityid     icd:xxxx
             This is the unique ID for the table. Check the last row in the existing ICD CSV in ClaimCenter/cc/config/referencedata and increment by 1 for each row. You can use the Microsoft Excel auto-increment functionality to create entity IDs beyond the first one.
bodysystem   Add the numeric value that corresponds to the range the ICD code falls into. Review the ICDBodySystem typelist for code ranges.
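For illustration only, a single row in this format might look like the following. The entity ID, body-system value, code, and description are invented placeholders, not actual reference data:
ICDCode,0,icd:9001,4,S42.001A,FRACTURE OF RIGHT CLAVICLE INITIAL ENCOUNTER,false,,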
You must supply a password (for the logon user). Enter the file name of the file to import (fileName).
Procedure
1. Open the PDF file of revised diagnosis code titles. ClaimCenter stores the code title as the diagnosis
description.
2. Log in to ClaimCenter as a user with the following permissions:
• View Admin
• View reference data
• Edit reference data
3. Navigate to the Administration tab.
4. Click Business Settings→ICD Codes.
5. For each code listed in the PDF, enter the code in the Code field.
6. Click Search.
7. Click the ICD Code value.
8. Click Edit.
9. Copy the description from the PDF and paste it into the Description field in ClaimCenter.
10. Click Update.
11. Repeat steps 5 through 10 until you have reached the end of the PDF.
Procedure
1. Open the PDF file of invalid diagnosis codes.
2. Log in to ClaimCenter as a user with permissions View Admin, View reference data, and Edit reference data.
3. Click the Administration tab.
4. Click Business Settings→ICD Codes.
5. For each code listed in the PDF, enter the code in the Code field.
6. Click Search.
7. Click the ICD Code value.
8. Click Edit.
9. Set the Expires On date.
10. Click Update.
11. Repeat steps 5 through 10 until you have reached the end of the PDF.
Security Zones screen Define new security zones using the Security Zones screen in the ClaimCenter Administration tab. See
the Application Guide for details.
File security-zones.xml Define additional security zones in file security-zones.xml. ClaimCenter loads this data as part of
the administration data load into an empty database at initial server startup. See “Understanding
the Security Zones File” on page 309 for more information.
Export/import of data Import new security zone data using the export and import functionality accessible in the
ClaimCenter Administration tab. See “Import Security Zones” on page 310 for more information.
See also
• For information on ClaimCenter import of administration data at initial server startup, see “About the import
Directory” on page 292.
Procedure
1. Open a command prompt and navigate to the ClaimCenter installation directory
2. Run the following command:
gwb genImportAdminDataXsd
3. In the file system, navigate to the following location in the ClaimCenter installation directory:
build/xsd
<SecurityZone public-id="cc:236">
<Description>Some meaningful description…</Description>
This example sets the public-id value to cc:236. Choose a public ID value that makes business sense for your organization.
6. After saving the file, navigate to the following location in ClaimCenter:
Administration→Utilities→Import Data
7. Browse to find your modified file and click Next.
If there are conflicts between the administrative data in the import file and the existing administrative data,
ClaimCenter provides a mechanism for conflict resolution. You can choose to do one of the following:
• Overwrite all existing data with the imported data
• Discard updates to any existing data and keep the existing data
• Interactively resolve each data conflict on a case-by-case basis
8. After resolving all data conflicts, click Finish.
Result
You can now view and use the updated set of security zones in the Administration Security Zones screen without server
restart.
Next steps
See also
• “Construct an XML File for Import” on page 299
• “About Exporting ClaimCenter Administrative Data” on page 304
• “About Importing ClaimCenter Administrative Data ” on page 303
Procedure
1. Start the ClaimCenter server.
2. Clear the zone data staging tables.
If you have multiple countries defined, you can include the -country countryCode option to clear staging
zone data only for the country you want to load:
zone_import -password password -clearstaging [-country countrycode]
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
3. Load the zone data file into the staging tables:
zone_import -password password -country countrycode -import filename
4. Clear existing zone data in production. Perform this step if zone data already exists for the country whose data
you intend to load.
zone_import -password password [-country countrycode] -clearproduction
5. Set the ClaimCenter server run level to MAINTENANCE:
system_tools -password password -maintenance
6. Load zone data from the staging tables into production:
table_import -server url -password password -integritycheckandload -zonedataonly
7. Set the server run level back to MULTIUSER:
system_tools -password password -multiuser
Next steps
See also
• For information on using the zone_import command, see “Zone Import Command” on page 434.
• For information on using the table_import command, see “Table Import Command” on page 430.
• For more information on database staging tables, see the Integration Guide.
• For information on the web service ZoneImportAPI that also imports zone data, see the Integration Guide.
each country. ClaimCenter stores the zone-config.xml files for base-configuration countries in country-specific
folders. Navigate to the following location in the Studio Project window to view these files:
configuration→config→geodata
For example, Guidewire provides zone configuration data for France in file zone-config.xml in the following
location:
configuration→config→geodata→FR
Each country-specific zone-config.xml file must have a single top-level <Zones> element for that country.
Underneath each <Zones> element are <Zone> elements that define the zone fields to import from zone data files for
that country. For each field, the fileColumn attribute indicates the position of the field within lines of the files.
The following example XML code from a zone-config.xml file defines the fields to import from zone data files for
the United States. The example code specifies for United States zone data files that the third field in the comma-separated values of each line corresponds to a city.
<Zones countryCode="US">
...
<Zone code="city" fileColumn="3" granularity="2">
...
See also
• For complete information about zone-config.xml, including a description of its XML elements and attributes,
see the Globalization Guide.
The free-text batch load command loads the Guidewire Solr Extension, a full-text search engine, with index
documents for all claim contacts in your ClaimCenter application database. Index documents are XML documents
that contain a subset of the information from claim contacts in ClaimCenter. The Guidewire Solr Extension indexes
the documents after it receives them from the free-text batch load command.
Note: The free-text batch load command runs on the host where the Guidewire Solr Extension resides. The
command is located in the /opt/gwsolr/cc/solr/claimcontact_active/conf directory, not the ClaimCenter/
admin/bin directory.
See also
• Installation Guide
• Configuration Guide
• Your instance of ClaimCenter is upgraded to a later version, and the upgrade changes metadata definitions for
index documents stored in the Guidewire Solr Extension.
Do not run the free-text batch load command if you configured free-text search for embedded operation. Whenever
the Guidewire Solr Extension runs in embedded mode, use the Server Tools Batch Process Info screen to execute the
Solr Data Import batch process instead. If you run the batch load command with embedded operation, ClaimCenter
limits the number of index items to 10,000.
See also
• “Batch Process Info” on page 349
Guidewire recommends that you take a backup of the current index before running each free-text batch load
command. This backup provides a snapshot of the index that you can restore if the batch load does not complete
successfully. For information about how to perform a backup, see the Solr documentation on the following web site:
https://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.10.pdf
Procedure
1. In ClaimCenter, suspend the CCSolrMessageTransport message destination.
2. Shut down and restart the Guidewire Solr Extension.
Shutting down the Guidewire Solr Extension forces it to pick up any changes to data-config.xml.
3. Navigate to the following directory in your Solr installation:
/opt/gwsolr/cc/solr/claimcontact_active/conf
http://host:8983/cc-gwsolr/cc_claimcontact_active/dataimport?command=full-import&entity=claimcontact
This topic describes a tightly constrained system for updating data on a running production server other than through
PCF pages or web services. Guidewire calls this mechanism the Production Data Fix tool.
WARNING Only use the Production Data Fix tool under extraordinary conditions, with great caution, and upon
advice of Guidewire Support. Before registering a data change on a production server, register and run the data
change on a development server. Guidewire recommends multiple people review and test the code and the
results before attempting the data change on a production server.
Separation of Permissions
To decrease security risks, the Production Data Fix tool separates its operation into distinct tasks, each of which has different permissions and entry points.
By having multiple different paths and multiple different roles, there is no single point of attack.
IMPORTANT Guidewire recommends that you force separation of responsibilities into two different ClaimCenter
users. Give each user either the wsdatachangeedit permission (to register data change code) or the
admindatachangeexecd permission (to execute the code), but not both permissions.
Preserving Results
ClaimCenter captures the results of script execution. This increases accountability and makes debugging easier.
Replay Prevention
To prevent replay attacks, the Production Data Fix tool runs each registered script a maximum of one time. If you
need to run it again, you must first re-register the script and create a new change control reference.
See also
• “Logging Data Change Operations” on page 324
WARNING Carefully test your data change code. Guidewire strongly recommends that multiple people review
and approve the code for safety and correctness before proceeding.
To persist changes to the database, use the gw.Transaction.Transaction class and its method runWithNewBundle.
You pass the method a block that runs code. If the block does not throw an exception, ClaimCenter persists any data
changes from your Gosu block to the database. If the block throws an exception, no changes persist to the database.
Design your data change code to minimize the number of entity instances you change. Too many changes in entity data increase the chance of memory issues or concurrent data exceptions.
Save your Gosu code to a local file that ends in .gsp.
See also
• Gosu Reference Guide
IMPORTANT If you need to re-run a successful data change, you must first re-register the script with a new
reference ID. This requirement preserves the integrity of the results log.
See also
• “Writing Gosu Data Change Code” on page 321
WARNING Before registering a data change on a production server, register and run the data change on a
development server first. Guidewire recommends multiple people review and test the code and the results
before attempting the data change on a production server.
Procedure
1. Ensure that the ClaimCenter application server is running.
2. Open a command prompt and navigate to the following location in the ClaimCenter installation:
admin/bin
data_change -description DESCRIP -edit REFID -gosu PATH -server SERVERURL -user USER -password PW
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
For example:
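The following invocation is illustrative only; the description, reference ID, file path, server URL, and credentials are placeholders that you replace with your own values:
data_change -description "Fix claim contact name" -edit REFID1234 -gosu /tmp/data_change.gsp -server http://localhost:8080/cc -user su -password password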
Result
The script outputs results such as:
Running data_change.gsp
Connecting as su to URL http://localhost:8080/cc/ws/gw/webservice/systemTools/DataChangeAPI
Edit change ref=REFID1234 publicId=cc:1
Next steps
See also
• “Data Change Command Tool Reference” on page 324
WARNING Before registering a data change on a production server, register and run the data change on a
development server first. Guidewire recommends multiple people review and test the code and the results
before attempting the data change on a production server.
Procedure
1. Call the DataChangeAPI web service method updateDataChangeGosu.
2. Pass the method the following arguments as String objects:
• The reference ID
• A human-readable description
• The Gosu code to run
For example:
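The following Gosu fragment is a sketch only. The api variable stands for a client proxy of the DataChangeAPI endpoint shown in the registration example, and the reference ID, description, and code string are placeholders:
// Register a data change through the DataChangeAPI web service (client proxy assumed)
var gosuCode = "print(\"data change placeholder\")"
var publicId = api.updateDataChangeGosu("REFID1234", "Fix claim contact name", gosuCode)
print("Registered data change with public ID " + publicId)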
Result
The method call returns the public ID of the new DataChange entity instance.
Next steps
See also
• “Register a Data Change Using a Command Prompt” on page 321
• “Data Change Command Tool Reference” on page 324
WARNING Only use the Production Data Fix tool under extraordinary conditions, with great caution, and upon
advice of Guidewire Support. Before registering a data change on a production server, register and run the data
change on a development server first. Guidewire recommends multiple people review and test the code and the
results before attempting the data change on a production server.
Procedure
1. Log into your ClaimCenter development environment as an administrator with the admindatachangeview
permission.
2. Select the Administration tab.
3. Expand Utilities→Data Change.
4. In the list of data changes, use the Reference column to find the data change request by its reference ID.
5. Click on the data change row in that list.
The screen shows the Gosu code for that data change.
6. Review the Gosu code to confirm it is what you expect.
7. Click Execute.
During code execution, ClaimCenter does not display the results immediately in the Result pane. Instead, the
status of Executing shows in the list of data changes.
8. After a few seconds, click Reload on the screen to view the current status and results.
• If the change is successful, ClaimCenter confirms the success in a message that uses your reference ID:
• If the change was not successful, ClaimCenter shows any compile errors or exceptions in the user interface
in the Result pane.
9. Confirm your changes in the database and check your logging results from the change.
10. If the data change appears safe in your development environment, carefully register and run the data change
on the production server.
Next steps
See also
• “Writing Gosu Data Change Code” on page 321
// Enclosing bundle call, as described in “Writing Gosu Data Change Code”
gw.transaction.Transaction.runWithNewBundle(\ bundle -> {
  // For demonstration, get a User object and make a minor data change to the first name
  var u = gw.api.database.Query.make(User).select().first()
  bundle.add(u)
  u.Contact.FirstName = u.Contact.FirstName + "SUFFIX"
  // To log arbitrary text in the Data Change UI, get a results writer (type is java.lang.Appendable)
  var rw = DataChange.util.ResultsWriter
  rw.append("Add arbitrary log message here\n")
  // Enable detailed logging of each property value before and after the change
  DataChange.util.setDetailResultWriting(bundle)
})
To test and debug your code in Studio Scratchpad, you may want to print to the console using the standard print
statement. Also, add one more argument to the runWithNewBundle method to represent a user name. For example,
pass the String value "su" to create your writable bundle as the super user.
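A minimal Scratchpad sketch, assuming the same demonstration change as above; the print statements and the trailing "su" argument are the only additions:
gw.transaction.Transaction.runWithNewBundle(\ bundle -> {
  var u = gw.api.database.Query.make(User).select().first()
  bundle.add(u)
  print("Before: " + u.Contact.FirstName)
  u.Contact.FirstName = u.Contact.FirstName + "SUFFIX"
  print("After: " + u.Contact.FirstName)
}, "su")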
WARNING Only use the Production Data Fix tool under extraordinary conditions, with great caution, and upon
advice of Guidewire Support. Before registering a data change on a production server, register and run the data
change on a development server. Guidewire recommends multiple people review and test the code and the
results before attempting the data change on a production server.
data_change -help
data_change -password password [-server url] [-user user] {
-edit refID -gosu filepath [-description desc] |
-discard refID |
-status refID |
-result refID }
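For example, to check the status of a previously registered change and then retrieve its results, the following invocations are illustrative only; the reference ID, server URL, and credentials are placeholders:
data_change -user su -password password -server http://localhost:8080/cc -status REFID1234
data_change -user su -password password -server http://localhost:8080/cc -result REFID1234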
See also
• For a description of how and when to use the data_change command to change data on a running production
server, see “Production Data Fix Tool” on page 319.
• For a description of how to use the DataChangeAPI web service, see “Data Change Web Service Reference” on
page 325.
• For specifics of the data_change command options, see “Data Change Options” on page 415.
See also
• “Overview of the Production Data Fix Tool” on page 319
• “Typical Use of the Production Data Fix Tool” on page 320
• “Data Change Command Tool Reference” on page 324
• Integration Guide
ClaimCenter provides the following user roles for use with business rules.
IMPORTANT If you are running ClaimCenter business rules on a production server, you must set
BizRulesDeploymentEnabled to true and provide a value for BizRulesDeploymentId.
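In config.xml, these parameters follow the standard <param> entry format. The deployment ID value shown here is a placeholder; choose a value appropriate to your environment:
<param name="BizRulesDeploymentEnabled" value="true"/>
<param name="BizRulesDeploymentId" value="1"/>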
See also
• Configuration Guide
See also
• “Understanding the Configuration Registry Element” on page 46
• Configuration Guide
A rule must go through the following states before you can deploy the rule:
• Draft
• Staged
• Approved
A rule must be in the approved state before it is possible to deploy the rule. Approving a rule does not deploy the
rule. You must actively deploy a rule before ClaimCenter evaluates that rule at runtime.
Rule deployment is distinct from rule import. On completing a rule import, you must deploy any new approved rule
versions before ClaimCenter can evaluate the rules.
In general, Guidewire recommends that you use separate cluster environments for each of the following in working
with ClaimCenter business rules:
• Development
• Test
• Production
Typically, you deploy a rule version in a production environment only. In a production environment, you must also set configuration parameter BizRulesDeploymentEnabled to true.
See also
• “Business Rule Versioning” on page 331
• “Business Rule State” on page 332
Draft ClaimCenter assigns a status of Draft after you save the initial version of the rule. A rule reverts to Draft status whenever the rule undergoes any type of editing. It is not possible to export a rule in the Draft state.
Staged You manually move a rule in the Draft state to the Staged state, after you complete rule editing. Typically, this is the
point in the rule lifecycle that you export the rule from the development environment and import the rule into the
testing environment.
Approved You manually move a rule from the Staged state to the Approved state, usually in the testing environment after you complete the rule evaluation phase. Typically, this is the point in the rule lifecycle that you export the rule from the testing environment and import it into the production environment.
Deployed You manually move a rule from the Approved state to the Deployed state. Typically, this occurs after you import the
rule into the production environment.
The following diagram describes the various states associated with a business rule, as it is created, edited, and
deployed in development and production environments.
[Diagram: rule state flow from Draft to Staged to Approved to Deployed as an Implementation Specialist creates, edits, exports, and deploys the rule across the Development, Testing, and Production environments. Export deployed rules from Production and import them back to Development and Testing to keep all environments in sync.]
Guidewire does not require that you move the rule back to development before making changes. However, if you do
not export a deployed rule back to the development server, the version number on the development server does not
correspond to the version on production. For example, if you never export the rule back to development, the version number stays at 0+ on the development server, while the version number on the production server increases each time you deploy the rule.
The following table shows rule version and status change for the same rule, except that you do not export the rule
from production to development.
BizRules
BizRules.Audit
BizRules.Autocomplete
BizRules.Compiler
BizRules.ContextHelp
BizRules.Export
BizRules.Import
BizRules.UI
BizRules.Validation
The business rule information that ClaimCenter logs includes the following:
• The rule context
• The rule ID
• Whether the rule condition evaluates to true or false
• The rule action parameters as a list of name-value pairs, which is empty if the condition evaluates to false
See also
• “Application Logging” on page 25
• “Set Log Level” on page 357
ERROR BizRules.Validation Rule Test Rule has validation error: Could not resolve symbol
for : SomeActivity
Guidewire recommends that you review the application log after starting the application server to determine if
ClaimCenter reports any rule as invalid.
You transfer rules between different server environments by exporting the rules from one server environment and
importing the rules into the other server environment. Typically, one or more appointed Rules Administrators
manage the export and import of business rules between different ClaimCenter server environments.
ClaimCenter Action
Imports the rule ClaimCenter imports the rule if any of the following are true:
• The rule for import is new and does not exist in the importing server environment.
• The rule for import is an update to an existing rule in the importing server environment.
Does not import the rule ClaimCenter does not import the rule into the importing server environment if any of the following are true:
• There is no difference between the existing rule and the rule for import.
• The existing rule is already an update of the rule for import.
Raises a conflict ClaimCenter raises a conflict if it cannot automatically decide whether to import the rule. This can happen, for example, if the existing rule and the rule for import both have updates that conflict.
As a concrete example, suppose that you update the rule on the testing server. Then you independently
update the rule on the development server. You export the rule from the development server and import it
into the testing server. The testing server raises a conflict while trying to import the rule. Within
ClaimCenter, you must choose whether to keep the existing rule or take the importing rule.
This type of rule conflict can occur also if you update or customize a default business rule and Guidewire
later provides an updated version of that rule in an upgrade. In that case, you must choose either to keep
one or the other of the updated rules or manage the merge of the two rules.
Rule export does not export utility functions, context objects, or other changes to the business rules plugin. You
must propagate changes to Gosu functions or context objects on the development server to the testing and
production servers.
Source and Target Data Models and Rule Export and Import
The business rules data model in the source and target systems must match. Otherwise, the rule import into the target
system fails. Ensure that the business rules data model of the source and target systems is the same before exporting
any business rule. If necessary:
• Modify the business rules data model of one system so that it matches the business rules data model of the other
system.
• Regenerate the business rules export file.
• Repeat the business rules import operation.
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the following screen:
Administration→Business Settings→Business Rules→Activity Rules
3. From the More drop-down list, choose one of the following options:
Export Selected    Exports only the selected business rules. ClaimCenter exports the selected rules from the currently active screen only.
Export All    Exports all business rules visible on all pages. ClaimCenter does not export rules hidden because of filters. Nor does ClaimCenter export rules in the Draft state that have never been deployed. If a rule is in the Draft state and has one or more previously deployed versions, ClaimCenter exports the last deployed version.
Next steps
See also
• “Import Business Rules into Guidewire ClaimCenter” on page 339
• “The Import/Export Status Screen” on page 340
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the following administrative screen:
Administration→Business Settings→Business Rules
3. Choose one of the following options:
Next steps
See also
• “Export Business Rules from Guidewire ClaimCenter” on page 338
• “The Import/Export Status Screen” on page 340
• “About the Business Rules Export File” on page 340
Rule conflicts?    Functionality
No If there are no issues to resolve for a given rule import operation, ClaimCenter displays a Review button in the
Imports table for that rule. Clicking Review opens the Review Import screen.
Yes If there are pending rule conflicts to resolve for a given rule import operation, ClaimCenter displays a Complete
Import button in the Imports table for that rule. Clicking Complete Import opens the Complete Import screen.
The Review Import and Complete Import screens are identical except that the functionality changes depending on
whether you are reviewing a rule import operation or resolving a rule import conflict.
The Review Import / Complete Import screen consists of the following distinct sections:
• “The Rule Import Disposition Table” on page 341
• “The Rule Import Manage Synchronization Table” on page 341
To access the business rules Import/Export Status screen, navigate to the following location within Guidewire
ClaimCenter:
Administration→Business Settings→Business Rules→Import/Export Status
Column Description
Outstanding The subcategories under Outstanding have the following meanings:
• New Rule – Number of rules in the import file that have no equivalent on the importing server.
• New Version – Number of rules in the import file that are newer versions of rules on the importing server.
• Rule Deployment – Number of rules being imported that have a deployed version.
• Version Conflict – Number of rules for which there are version conflicts between the importing rules and
the existing rules on the importing server.
Note: If there are no rule conflicts for a given import operation, all subcategories for that operation display
zero (0).
Imported Number of rules ClaimCenter imported from the import file.
Discarded Number of rules discarded by user action.
Applied Edited Number of rules that the Rule Administrator edited and saved.
No Change Number of rules that require no additional action. These rules exist in the importing server already and do
not require updating. For example, the rule in the import file is an earlier version of the rule version on the
importing server.
To access the Import/Export Status screen, navigate to the following location in Guidewire ClaimCenter:
Administration→Business Settings→Business Rules→Import/Export Status
If you undertake an action to resolve a rule import conflict, you must save and apply your change. See “Resolve
Rule Import Conflicts” on page 343 for details.
The following list describes the various columns in the table.
Column Description
Rule Name Name of the rule. ClaimCenter adds a warning next to the rule name if the importing rule name duplicates the
name of a differently configured rule on the importing server.
Status Current status of the rule. Some examples are:
• New Rule – This rule does not exist on the importing server.
• New Rule Version – This version of the rule does not exist on the importing server.
• No Change – The existing rule is a more updated version of the importing rule.
• Rule Deployment – The existing version of the rule needs deployment to match the rule version of the importing
rule.
• Versions conflict – There are conflicts between the existing and importing versions of the rule that you must
manage manually.
Depending on the rule status, it is possible for the list of possible actions in the Available Actions column to change.
Available Actions    If there are no import conflicts with the given rule, there are no actions available in this column. If there is a conflict between the importing and existing rule versions, the following actions are available:
• Existing Version – Keep the existing rule on the importing server and discard the imported rule.
• Importing Version – Keep the importing rule version and discard the existing rule version on the importing
server.
• Compare – Click to open the Compare Rules screen. Use this screen to compare the two rules side-by-side to
view their similarities and differences and determine the course of action to take.
Existing Version    The version of the rule that exists in the importing server. Click to access a read-only view of the rule.
Importing Version    The version of the rule in the import file. Click to access a read-only view of the rule.
To access the Import/Export Status screen, navigate to the following location in Guidewire ClaimCenter:
Administration→Business Settings→Business Rules→Import/Export Status
Keep Existing Version    Choose to retain the currently existing rule and discard the importing rule. If you select this option, ClaimCenter reopens the Complete Import screen with the Existing Version radio button selected for the rule under Available Actions.
Replace with Importing Version    Choose to accept the importing rule and discard the existing rule. If you select this option, ClaimCenter reopens the Complete Import screen with the Importing Version radio button selected for the rule under Available Actions.
Edit Select one of the following from the Edit drop-down list to modify either the existing or importing rule:
• Existing Version
• Importing Version
If you choose to edit a rule version, ClaimCenter opens the rule in edit mode. Editing a rule creates a new Draft
version of the rule.
If you save the edited rule, it becomes the resolved rule version for import completion. After making this update,
you can no longer select either the importing or existing rule in the Complete Import screen. The status of the rule
in the Complete Import screen changes to Edited Version. If you select the rule and apply your change, the Applied
Edited field increments by one (1).
ClaimCenter disables the edit functionality if the existing rule version is in Draft mode.
See also
• “Resolve Rule Import Conflicts” on page 343
Procedure
1. Navigate to the following location in Guidewire ClaimCenter:
Administration→Business Settings→Business Rules→Import/Export Status
2. For each rule import that has a status other than Completed, click Complete Import.
The Complete Import screen opens.
3. Review each rule that shows an import conflict in the Complete Import screen. Do one of the following:
a. Click the Existing Version or the Importing Version link to see the details for that rule version.
b. Click the Compare link to open the Compare Rules screen in which you can view the two rule versions in a
side-by-side table.
4. (Optional) After reviewing the two rules in the Compare Rules screen, do one of the following:
• Click Keep Existing Version to accept the existing rule on the import server with no change.
• Click Replace with Importing Version to accept the import rule with no change.
• Select an Edit option to open an editable view of either the existing or importing version of the rule.
• Note: ClaimCenter disables the edit functionality if the existing rule version is in Draft mode.
5. (Optional) Make your edits in the Compare Rules screen, then click Save Edited Version.
If you edit either of the rule versions in the Compare Rules screen, that version of the rule becomes the resolved
version of the rule. Thereafter, you cannot ever use the importing or existing version of the rule to resolve the
rule conflict.
ClaimCenter reopens the Complete Import screen.
6. In the Complete Import screen, select the resolution type by selecting a radio button in the Available Actions cell
for the rule of interest. Your choices are:
• Use the existing rule version.
• Use the importing rule version.
• Use the new, edited, version of the rule.
7. Select the rule for update by selecting the checkbox to the left of the rule name.
8. Save and apply your changes by clicking one of the following function buttons:
Import Selected Enabled if you select one or more rules. It must be possible to import these rules.
Discard Selected Enabled if you select one or more rules.
Import All Remaining Enabled if there are no remaining unresolved conflicts. It must be possible to import these rules.
Discard All Remaining Enabled if you select one or more rules.
9. Repeat these steps as necessary to resolve all rule conflicts in the rule import operation.
Next steps
See also
• “The Compare Rules Screen” on page 342
IMPORTANT Guidewire recommends that you set this parameter to true only in an environment in which you plan to do initial rule review or rule development.
See also
• “Business Rules Import at Initial Server Startup” on page 293
Procedure
1. Set configuration parameter BizRulesImportBootstrapRules in config.xml to true.
2. With an empty database, start the ClaimCenter application server.
ClaimCenter populates the database with the bootstrap business rules that exist in the following location in
ClaimCenter Studio:
configuration→config→import→bizrules
Next steps
Thereafter, manage your business rules using the standard rule import and export process, discarding any further use
of the default rules in the bizrules folder.
See also
• “About the import Directory” on page 292
• Configuration Guide
Procedure
1. Temporarily, start the application server with the value of BizRulesDeploymentEnabled set to false.
In this case, ClaimCenter ignores the value of BizRulesDeploymentId and import validation does not stop the
import of the file.
2. After the rule import succeeds, restart the application server with the value of BizRulesDeploymentEnabled
set to true.
Next steps
See also
• Configuration Guide
Administration Tools
chapter 23
Server Tools
Guidewire provides server tools to assist you with certain server and database administration tasks.
See also
• “Internal Tools” on page 411
The Batch Process Info screen contains the following areas or tabs that contain batch process information.
Processes Lists all the available batch processing types along with information about each individual batch process. It is also
possible to run certain actions on a selected batch processing type from this screen. See “Processes Table Columns”
on page 350 for more information.
Chart Shows the execution time in seconds and the number of operations performed by the batch process over time in a
graphical format.
History Shows records of past runs of the selected batch process in tabular form.
Note: It is possible to run writers for work queues either from the Work Queue info screen or from the Batch Process
Info screen.
See also
• “Administering Batch Processing” on page 85
• “Work Queues and Batch Processes, a Reference” on page 103
• “Processes Table Columns” on page 350
• “Chart and History Tabs” on page 351
• “Work Queue Info” on page 352
Column Description
Batch Process Name of the batch processing type.
Description Description of the batch processing type.
Action Actions that you can perform on the selected batch processing type. These actions include:
• Run – Runs a batch process. The Run button is active for all batch process types that belong to the BatchProcessTypeUsage category UIRunnable.
• Stop – Stops an actively running batch process.
• Download History – Downloads a batch process history report for the selected batch process in HTML
format. See “Download a Batch Process History Report” on page 351 for more information.
You cannot start multiple runs of non-exclusive custom batch processes from the Batch Process Info screen.
Instead, you must use the maintenance_tools command to start multiple runs of non-exclusive custom
batch processes.
Last Run Date on which this batch processing type last ran.
Last Run Status Completion status from the last run of this batch processing type.
Next Scheduled Run Date of the next scheduled run for this batch processing type.
Schedule Scheduling actions that you can perform on the selected batch processing type. These actions include:
• Stop – Disables the scheduled runs for the selected batch processing type.
• Start – Enables the scheduled runs for the selected batch processing type.
Column Description
Cron-S M H DOM M DOW    Column header stands for seconds, minutes, hours, days of month, month, and day of week.
See also
• See “Work Queues and Batch Processes, a Reference” on page 103 to determine if it is possible to run multiple
instances of a batch processing type.
• See “Maintenance Tools Command” on page 418 for details of how to start multiple batch processes of the same
type.
• See the Integration Guide for a discussion of the meaning of the Exclusive property on a batch process.
Column Description
Start Requested The time at which ClaimCenter received the request to start the process.
Failure Reason For batch processes that are work queue writers, Failure Reason is the reason that a work item failed processing.
See also
• “The Work Queue Scheduler” on page 93
4. Select the date range for the records that you want to download.
5. Click Complete Download.
6. Select the location for the local file download and click OK.
7. Unzip the download file into a local directory.
8. Find and double-click index.html to open the report.
Next steps
See also
• “Batch Process Info” on page 349
• “Maintenance Tools Command” on page 418
Column Description
Work Queue Name of the work queue.
Available Number of work items available for processing.
Checked Out Number of work items checked out by workers.
Failed Number of work items that failed during processing.
Executors Running Number of workers processing the work queue.
Actions Actions that you can perform on the selected work queue. These actions include:
• Run Writer – Launches the writer to write work items for the work queue.
• Notify – Wakes workers by notifying the executor that there are items to process.
• Stop – Stops the selected work queue.
• Restart – Restarts the selected work queue.
• Download History – Downloads the historical instrumentation data for the work queue, in CSV format.
See also
• See “Worker Task Management” on page 99 for a discussion of the executor function.
• See “The Work Queue History Report” on page 357 for more information on the work queue history report.
Column Description
Process ID ID for the writer process.
Item Creation Time Time at which the writer woke and began writing work items. The first item in the table for a queue has a
creation time that matches the queue’s current Last Execution Time for the Writer value.
Server Name of the server running the work queue.
Scheduled Yes indicates that the start request was the result of the regular execution of a schedule. No indicates that a
user made the request manually.
Number of Items Total work items in the queue regardless of status.
Worker End Time Time at which the last worker completed the last item in the work queue.
Execution Time Time, in minutes, since the start of the process (the execution time so far).
Available Total number of available items in the queue.
Checked Out Number of items checked out by workers for processing.
Succeeded Number of items that completed successfully.
Failed Number of items that failed.
Column Description
Hostname Server on which the executor is running.
Max. Number of Workers Maximum number of workers available to the executor.
Under the By Executors tab is a By tasks tab. The By tasks columns have the following meanings:
Column Description
ID The unique identifier of the task.
Writer The identifier of the writer process.
Success Whether the worker succeeded in the processing of the work items.
Checked out items The number of work items the worker checked out.
Processed items The number of work items the worker processed.
Exceptions The number of exceptions, if any, encountered during item processing.
Orphans Reclaimed The number of orphaned work items the worker adopted for processing.
Column Description
ID Unique identifier of the work item.
Create time Timestamp of the work item creation time.
Available at Timestamp of when the work item is available for processing. This value is null for failed work items.
Server Server that processed the work item.
Writer Writer that created the work item.
Attempts How many attempts a worker has made to process the item.
Activity ID ID of the activity involved. This field is only visible if you select the Activity Escalation work queue.
Subject Subject of the activity involved. This field is only visible if you select the Activity Escalation work queue.
ClaimCenter shows data on this screen during the active execution of database checks only.
See also
• “Work Queue Reports” on page 355
• “Download a Work Queue Report” on page 356
Next steps
See also
• “Work Queue Reports” on page 355
• “The Work Queue Report” on page 355
Next steps
See also
• “Work Queue Reports” on page 355
Next steps
See also
• “Work Queue Reports” on page 355
• “The Work Queue History Report” on page 357
Type Description
Guidewire default    Guidewire provides a number of default ClaimCenter logging categories. You can also see the list of default Guidewire logging categories by running the system_tools command from a command prompt and adding the -loggercats option.
Guidewire internal classes    Guidewire provides a number of logging categories that apply to Guidewire internal classes. These logging categories start with com.guidewire.*. These logging categories generally look like a fully qualified class path.
Third-party software    Guidewire applications integrate with certain types of third-party software. The manufacturers of this software provide their own logging categories. These logging categories start with org.*, for example, org.apache.* or org.eclipse.*. These logging categories generally look like a fully qualified class path.
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user su (in
the base configuration) and you must supply that user’s password.
To use this command, you must supply the name of a specific logger category (logger) and the new logging level
(level) for that logger. Use the system_tools -loggercats command option to see a list of valid ClaimCenter
logger categories.
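For example, the following invocation lists the valid logger categories; the password is a placeholder:
system_tools -user su -password password -loggercats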
You must refresh the Server Tools Set Log Level screen after using the system_tools command to see your changes
reflected in that screen.
See also
• “Application Logging” on page 25
• “The Logging Properties File” on page 26
• “System Tools Command” on page 422
View Logs
The Server Tools View Logs screen contains the following items:
Field Purpose
Log File Use to select the log file to view.
Filter Use to display the log file lines that contain the specified word or words.
Max Lines to Display Use to set the maximum number of lines to show on the screen.
Info Pages
The Server Tools Info Pages provide information to help manage a ClaimCenter server and database. Guidewire
intends these screens for use by Guidewire Support, Integration Engineers, Database Administrators, and System
Administrators to diagnose existing and potential database-related performance problems. You can also use these
screens to review the results of a load operation.
Configuration
The Server Tools Configuration screen lists the values of the configuration parameters in your ClaimCenter
environment. This screen also includes a Download button. Click Download to download a copy of the following
configuration files:
• config.xml
• messaging-config.xml
• scheduler-config.xml
• work-queue.xml
You can find these files in the config folder within the downloaded ZIP file. The ZIP file also includes a current
directory, which includes the in-memory state of config.xml and work-queue.xml parameters on the server.
Guidewire makes the in-memory state available because it is possible to change certain configuration parameters
using a web service or JMX APIs after server startup.
Archive Info
Use the Server Tools Archive Info screen to view information about any archiving processing taken by ClaimCenter.
Note: the value of configuration parameter ArchiveEnabled must be true before you can view the Archive Info
screen.
On the Archive Info screen, you can take the following actions.
Refresh    Click to update the information on the Archive Info screen. Clicking the Refresh button also refreshes the IArchiveSource plugin.
Download Click to download archive information to an HTML report. This report is a summary of the information shown
on the Archive Info screen.
View Progress Click to open the Work Queue Info screen. From this screen you can view the progress of the Archive work queue.
You can also start an unscheduled run of the archive work queue from the Work Queue Info screen. For more
information, see “Work Queue Info” on page 352.
Export Upgrade Info    Click to download and save a .dat file that contains information on the database version for a specific archive operation.
Import Upgrade Info    Click to upload a .dat file that you previously exported. You must browse for a file (click Browse) to upload before the Import Upgrade Info button becomes active.
Browse... Click to open a standard file picker dialog.
See also
• “Configuring Archive Logging Operations” on page 40
• Application Guide
Reset counts Click Reset to set the value of Excluded or Failed items back to zero.
Filter the information by time period    Set a value for Begin Time and End Time, then click View.
Review archived items Select the Archived tab to view a list of items archived with this data model version and for the
specified time period.
Review skipped, excluded, and failed items    Select the appropriate tab to view the reason that the archiving process skipped or excluded an item from archiving, or the reason the items failed the archiving process. For each reason, you can click Reset All Items to reset the count to zero.
IMPORTANT Guidewire strongly recommends that you review the Warnings tab for possible issues every time you
change the data model.
Consistency Checks
Use the Server Tools Consistency Checks screen to view and run consistency checks on the ClaimCenter database.
The screen consists of two tabs:
• Run consistency checks
• View consistency checks definitions
The Server Tools Consistency Checks screen executes the Database Consistency Check batch process. See “Database
Consistency Check Batch Processing” on page 113 for more information.
Note: If the server running a database consistency check fails for some reason, ClaimCenter provides for a graceful
recovery. After the server becomes operational again, ClaimCenter starts the check at the place it last left off and
completes the check.
Item Action
Download all selected    Click to download the consistency check results for all selected consistency check runs in the results table. See “Run a Consistency Check from ClaimCenter” on page 364 for information on how to run a consistency check.
Delete Click to delete the results of prior consistency check runs for those rows with a check mark in the results
table.
Refresh Click to update the information on the screen while processing is active.
Run Consistency Checks    Click to submit a batch processing job to perform one or more database consistency checks. This action executes the Database Consistency Check batch process. See “Database Consistency Check Batch Processing” on page 113 for more information. This button is not available if the Database Consistency Check work queue is not active.
See also
• “Run a Consistency Check from ClaimCenter” on page 364
• “Run a Consistency Check Using System Tools” on page 365
Pause/Resume Consistency Checks    Click to pause a currently executing batch processing job. During a pause in processing, ClaimCenter changes the button label to Resume. Clicking the button resumes processing execution. If this button is not visible, click the Refresh button.
Cancel Consistency Checks    Click to cancel all currently executing batch processes. ClaimCenter displays this button only if there is a currently executing consistency check.
All tables    Click to select All tables (default) to run consistency checks against all database tables.
Specify tables    If you select Specify tables, ClaimCenter opens a table picker from which you can select the database tables against which the consistency checks run. You must select at least one table.
Specify table groups    If you select Specify table groups, ClaimCenter opens a table groups picker from which you can select the table groups against which the consistency checks run. You must select at least one table group.
Change    Click to change the number of workers used to execute the consistency check. Enter a positive integer value. If you change the number of workers in an active work queue, ClaimCenter stops the existing work queue and restarts it with the new number of workers. This button is not available if the Database Consistency Check work queue is not active.
Check all types? Click to select Specify Types to see the list of available check types from which you can make a selection of
Column Description
Download Click the Download icon to download a ConsistencyCheckRundate.zip file that contains the set of database
reports generated by this consistency check run.
Download Errors Click the Download Errors icon to download a ConsistencyCheckRunErrorsOnlyRunDate.zip file that contains
the consistency check log file and stack trace. ClaimCenter displays this column only if there are SQL errors in
the database consistency check.
See “Correct a Consistency Check SQL Failure” on page 365 for more information.
View    Click the View icon to open a pop-up from which you can view the same reports contained in the ConsistencyCheckRundate.zip file, after you supply your user credentials. ClaimCenter displays this column in test and development mode only. The column is not visible in production mode.
Delete Click the Delete icon to remove a consistency check row from the table.
Rerun Rerun a consistency check that has a SQL error. ClaimCenter displays this button only if there are SQL errors in
the database consistency check. See “Correct a Consistency Check SQL Failure” on page 365 for more
information.
Description List of tables against which the consistency checks ran.
With Errors Number of errors encountered by the consistency checks run.
Total Checks Total number of consistency checks that ran.
Not started Number of consistency checks that have not yet started in the current consistency checks run. Click the Refresh
button to update this data during a currently executing check run.
In progress Number of actively executing consistency checks at any given moment.
Finished Number of consistency checks that completed in this run.
Start time Time at which this set of consistency checks started.
End time Time at which this set of consistency checks ended.
Duration Length of time that this set of consistency checks took to run.
Version Database version, listing (in order):
• Application major version
• Application minor version
• Platform major version
• Platform minor version
• Data model version
See “Understanding Guidewire Software Versioning” on page 395.
ID Identifier (ID) of the stored results of this consistency check run.
If the consistency checks results table lists multiple check runs, use the check box next to a table row to select a
consistency check run for further action.
See also
• “The View Consistency Checks Definitions Tab” on page 363
• “Run a Consistency Check from ClaimCenter” on page 364
• “Run a Consistency Check Using System Tools” on page 365
• “Correct a Consistency Check SQL Failure” on page 365
Action Description
Download consistency check information    Click Download to download a ZIP file that contains a set of linked HTML files that describe the consistency checks that Guidewire provides in the ClaimCenter base configuration.
Search by table name    Search by table name to find the consistency checks related to a specified table. Most consistency checks operate on the specified table, but some checks, such as typelist table checks, operate on other tables as well.
To search, enter a complete or partial table name in the Table name fragment field and click Search. The results of the search show in a table that lists the table name, the consistency check name, and a description of the consistency check.
To clear the results of the search, click Reset.
Filter by check type    Filter the list of consistency check types to see a list of all tables for which that consistency check is available.
View SQL query Review the SQL query used to generate a given database consistency check. First, select a consistency
check, then:
• Select the Command tab at the bottom of the screen to view the SQL command of the consistency
check. The SQL command retrieves a count of rows that violate the consistency check.
• Select the Query to identify rows tab at the bottom of the screen (if available) to view the SQL query used
to identify rows that violate the consistency check. SQL queries to identify rows that violate consistency
checks are not available for all check types.
See also
• “The Run Consistency Checks Tab” on page 362
• “Run a Consistency Check from ClaimCenter” on page 364
• “Run a Consistency Check Using System Tools” on page 365
• “Correct a Consistency Check SQL Failure” on page 365
Procedure
1. Navigate to the Server Tools Consistency Checks screen.
2. Select the Run Consistency Checks tab.
3. (Optional) Specify any, or all, of the following items:
• Description that ClaimCenter prepends to the standard description of the tables and checks in the
consistency check reports.
• Tables or table groups on which to run the consistency checks.
• Number of workers to use in executing the consistency check.
• Type of consistency check to run.
4. Click Run Consistency Checks.
5. After the batch process completes, select one of the following in the summary table:
Download arrow Downloads a Zip file that contains the full set of consistency check reports.
Download Errors arrow Downloads a Zip file that contains only the consistency checks that contain SQL errors.
View icon Opens a pop-up window from which you can view the full set of consistency check reports.
6. If you downloaded the consistency check file to your local system, unzip the file into its own directory.
7. Locate the index.html file and double-click it to open it in a browser.
Next steps
See also
• “The Run Consistency Checks Tab” on page 362
• “The View Consistency Checks Definitions Tab” on page 363
• “Run a Consistency Check Using System Tools” on page 365
• Installation Guide
Procedure
1. Open a command prompt.
2. Navigate to the following location in the ClaimCenter installation:
admin/bin
You must supply a value for password. You can optionally supply additional parameters for the -checkdbconsistency option.
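For illustration only, an invocation might look like the following sketch; the password value is a placeholder, and any additional -checkdbconsistency parameters are omitted. See “System Tools Command” on page 422 for the full syntax.
system_tools -password <password> -checkdbconsistency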
Next steps
See also
• “System Tools Command” on page 422
• “The Run Consistency Checks Tab” on page 362
• “The View Consistency Checks Definitions Tab” on page 363
• “Run a Consistency Check from ClaimCenter” on page 364
Procedure
1. In the ClaimCenter Server Tools Consistency Checks screen, click Download Errors next to the check that
generated the error.
2. Open the error report:
a. Click the number in the With Errors column.
b. On the details report that opens, click Details.
Next steps
See also
• “The Run Consistency Checks Tab” on page 362
• “The View Consistency Checks Definitions Tab” on page 363
• “Run a Consistency Check from ClaimCenter” on page 364
• “Run a Consistency Check Using System Tools” on page 365
Upgrade
Guidewire recommends that you run all consistency checks before and after a database upgrade. Running these
checks before and after a database upgrade helps to verify the validity of the data and identify potential issues.
See also
• “Run a Consistency Check from ClaimCenter” on page 364
• “Run a Consistency Check Using System Tools” on page 365
Verify Click to have ClaimCenter compare the database schema with the schema defined in the data model metadata files. ClaimCenter shows any errors that it finds on the screen.
Download Database Schema Verification Errors Click to download the results of the database schema verification. If there were no errors, the report is empty.
Download Database Table Info Click to download a ZIP file containing a number of reports that document each table in the ClaimCenter data model.
See also
• “Database Table Info Reports” on page 366
• “Understanding the Database Table Info Reports” on page 368
• “View the Database Table Info Reports” on page 368
Screen Description
All Tables Provides information about all ClaimCenter tables.
Guidewire Version Lists schema version and build information for ClaimCenter.
Indexes by Table Lists the indexes on a table and provides information about the associated key
columns.
Spatial Indexes Provides information about the spatial indexes.
Primary Key Constraints by Table Lists the primary key constraints on tables and provides information about the fields
that reference the keys.
Foreign Key Constraints by Table Lists the foreign key constraints on tables and provides information about the tables
referenced by the keys.
Typekey Columns by Typelist Lists the referencing typekey columns for each typelist.
Number of columns and min/max row lengths Displays the number of columns and categories of columns and the minimum and
maximum row length in each table. Overly large row lengths in a database can lead to
inefficiencies in data queries.
Possibly Redundant Backing FK Indexes Lists foreign key indexes that may be redundant, including information about whether
the index is unique and if it is an extension.
Indexes with Shared Prefixes Lists indexes that share multiple leading key columns. It is possible to use this
information to find redundant indexes.
Indexes with the Same Key Columns Lists indexes that have the same key columns.
Indexes without a Description Lists indexes that do not have a description.
Indexed Views Lists any indexed views and the view definitions.
Event Paths from Tables to Listening Objects Shows paths from event-generating entities to non-event-generating entities. Each
row in the table contains a non-event-generating entity E, along with one of the paths
from an event-generating entity to E. ClaimCenter uses each path to generate a query
to find the event-generating entity instances that reference an instance of the non-
event-generating entity.
Event Paths to Listening Tables Shows the same paths as those in the Event Paths from Tables to Listening Objects report,
but the entities in the second column are the event-generating entities. Each entry in
the table contains an event-generating entity E, along with a path from E to a non-
event-generating entity.
Instrumentation Queries Lists the queries that ClaimCenter executes against the database while building the data that comprises the download reports.
ClaimCenter copies table-specific information from each report category to the individual report for the respective
table, excepting the All Tables and Instrumentation Queries reports.
Configuration Files
The download ZIP file contains several directories of configuration files:
• The config directory contains a number of ClaimCenter configuration files as defined at server startup.
• The current directory contains the in-memory state of the batch-process-config.xml, config.xml, and
work-queue.xml parameters on the server. Guidewire makes the in-memory state available because it is possible
to change certain configuration parameters using a web service or JMX APIs after server startup.
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages Database Table Info screen.
3. Click Download Database Table Info.
4. Save the download ZIP file.
5. Unzip the ZIP to a local directory.
6. Find and double-click file index.html to view a table of contents for the downloaded reports in a browser
window.
7. Do one of the following:
• Click a report type to open that report.
• Click config_files on the table of contents page to access copies of certain metadata configuration files.
Database Parameters
The Server Tools Database Parameters screen displays information about the database configuration. You can view the
information on-screen, or you can download a set of linked HTML reports that contain the same information.
To download and access the HTML reports, click Download Database Parameters Info, unzip the resulting file, and click
index.html.
To view the information on-screen, select a view type from the View drop-down list. Depending on the database type,
the View list contains the following items.
View Lists
Database and Driver Versions of the database and its associated driver.
Database Connection Pool Settings Connection pool settings as configured in database-config.xml if using
ClaimCenter to manage the connection pool. See “The dbcp-connection-pool
Database Configuration Element” on page 225 for more information on these
parameters.
If you use the application server to manage the connection pool, then this
screen does not show connection pool parameters. Instead, tune the
connection pool by using the Administrative Console of the application server.
Database Connection Properties Properties related to the database connection.
View Lists
Guidewire Database Config Guidewire-specific database configuration parameters. The table lists the <database> element attributes specified in file database-config.xml, or lists the default value if file database-config.xml does not specify a value.
Guidewire Database Config Statistics Settings Guidewire-specific database configuration parameters related to statistics
gathering.
Guidewire Database Upgrade Configuration Guidewire-specific database configuration parameters related to upgrade as
defined by the <upgrade> element in file database-config.xml.
Guidewire Version Information about this specific version of ClaimCenter.
Linguistic Search Options Options defined for linguistic searching in ClaimCenter. See the Globalization
Guide for more information.
Linguistic Search Oracle Functions and Java Source Oracle functions, and Java source, for linguistic searching.
SQL Server Server Global Server Settings Global server settings for the SQL Server instance.
SQL Server Server Instance Attributes and Values Attributes and values for the SQL Server instance connected to ClaimCenter.
SQL Server Session Properties Properties of the SQL Server session for the current connection with
ClaimCenter.
Summary of Queries Executed to Build Download Number of queries used, and the execution time, to generate the information
in the HTML download reports.
Database Storage
The Server Tools Database Storage Information screen provides information about the space and memory taken up by
the database on the ClaimCenter server. You can both view and download database storage information from this
screen.
The following table lists the filtering options for the database storage information:
After setting the desired filtering options, choose one of the following:
• To see the database storage information on the current ClaimCenter screen, click Display Database Storage Info.
• To download the database storage information to view later, click Download Database Storage Info.
After you click Display Database Storage Info, the screen shows information at the bottom of the screen, along with a
drop-down that you can use to filter the information.
IMPORTANT Guidewire recommends that you download and save the database storage information immediately
before and after an upgrade or other significant database change. This information sets a point of reference that you
can provide to Guidewire Support if requested.
Oracle User LOBs Space allocation information similar to that shown for Oracle LOBs Alloc Space, sorted by table name.
Queries Executed to Build Download List of SQL queries used to generate the data. The data includes the SQL used to generate the query and other information, such as the number of rows returned and the time it took for the query to run.
Summary of Queries Executed to Build Download Simple summary of the number of queries involved in generating the data and the total database time that the queries took to execute.
Table Alloc Space + Estimated Advanced Compression Settings Size in megabytes for each table in the database.
Data Distribution
Use the Server Tools Data Distribution screen to run a batch processing job that generates data on the distribution of
various items in the database. You can then view this information on-screen or download a set of reports that details
this information.
There are multiple categories of data distribution reports.
Comparison reports The report shows data for the various items selected for inclusion in any of the comparison reports. It also shows the individual row count for the data, as of that date. The intent of this report is to provide a way to visualize data growth. The download ZIP file contains an HTML report plus separate CSV reports. The HTML report contains distinct columns representing the data from each report used for comparison.
Combined reports The report combines information from multiple runs. The intent of the report is to provide a way to create smaller reports that require less generation time and then combine the information into one report. The download ZIP file contains an HTML report that contains information on each of the previous reports combined into this report, plus tables that contain the combined data.
It is also possible to start the data distribution batch process (DataDistribution) directly from the command
prompt by using a maintenance_tools command option.
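For example, an invocation from the command prompt might look like the following sketch. The -startprocess option name and the password value are shown here as assumptions only; see “Maintenance Tools Command” on page 418 for the exact option and syntax.
maintenance_tools -password <password> -startprocess datadistribution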
See also
• “Generate and View a Data Distribution Report” on page 372
• “Download Comparison and Combined Data Distribution Reports” on page 372
• “Maintenance Tools Command” on page 418
Procedure
1. Navigate to the Server Tools Info Pages and select Data Distribution.
2. Select from the available options listed under Data Distribution Batch Job Parameters.
3. Click Submit Data Distribution Batch Job.
4. After the batch job completes, select one of the following in the summary table:
Download arrow Downloads a DataDistribution.zip file that contains the set of database reports.
View icon Opens a pop-up window from which you can view the same reports contained in the DataDistribution.zip file, after you supply your user credentials.
5. If you downloaded the report to your local system, unzip the downloaded ZIP file into its own directory.
6. Locate the index.html file and double-click it to open it in a browser.
7. Use the links on the screen to navigate through the distribution reports.
Next steps
See also
• “Data Distribution” on page 372
• “Download Comparison and Combined Data Distribution Reports” on page 372
Procedure
1. Navigate to the Server Tools Info Pages→Data Distribution screen.
2. In the summary table, select (check) at least two of the generated reports.
ClaimCenter enables the following download buttons:
• Download Comparison Zip File
• Download Combined Zip File
3. Click the appropriate button.
Database Statistics
The Server Tools Database Catalog Statistics Information screen provides reports about out-of-date statistics in the
database indexes, histograms, staging tables, and ClaimCenter application tables. This screen is not available with
the development-only QuickStart database.
The Database Statistics screen contains the following tabs.
Tab Description
Database Statistics Information Use to generate and download database statistics reports for the entire database or for specific tables. See “Generate and Download a Database Statistics Report” on page 374.
Execution History Use to view information about individual database statistics reports and take action with respect to each
report. See “Working with Database Statistics Reports” on page 374.
Oracle Statistics Preferences Use to work with Oracle database preferences for database table statistics. This tab is only available after you set the useoraclestatspreferences attribute to true and perform an application upgrade.
See “Using Oracle AutoTask for Statistics Generation” on page 287 for more information.
Development Mode
In development mode, it is possible to run Database Statistics batch processing in any of the following ways:
• From a command prompt, using the -updatestatistics option of the system_tools command
• From the Execution History tab of the Server Tools Database Statistics screen
• As a scheduled batch process
Production Mode
In production mode, it is possible to run Database Statistics batch processing in the following ways only:
• From a command prompt, using the -updatestatistics option of the system_tools command.
• As a scheduled batch process
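In either mode, a command-prompt run might look like the following sketch; the password value is a placeholder and any additional parameters are omitted. See “System Tools Command” on page 422 for the full syntax.
system_tools -password <password> -updatestatistics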
Oracle AutoTask
For Oracle databases, it is possible to use the Oracle Autotask infrastructure to manage the collection of database
table statistics. To use Oracle AutoTask, do the following:
• Disable any scheduled runs of DBStats batch processing.
• Set the useoraclestatspreferences attribute on the <databasestatistics> element in file database-config.xml to true.
You must use either Oracle AutoTask or DBStats batch processing to manage the collection of database statistics.
Do not attempt to use both methods simultaneously.
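As an illustration of the useoraclestatspreferences setting described above, the relevant fragment of database-config.xml might look like the following sketch. Any other attributes on the <databasestatistics> element are omitted here and must be preserved as configured in your environment.
<databasestatistics useoraclestatspreferences="true"/>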
See also
• “Understanding Database Statistics” on page 279
• “Managing Database Statistics using System Tools” on page 282
• “Using Oracle AutoTask for Statistics Generation” on page 287
• “Generate and Download a Database Statistics Report” on page 374
• “Working with Database Statistics Reports” on page 374
• “System Tools Command” on page 422
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages→Database Catalog Statistics Information screen.
3. Select one of the following values for the View database catalogs statistics on all tables option:
Yes View statistics data for all database tables. This selection can take a few seconds to complete, depending on the
size of your database.
No View statistics data for the selected tables only. If you select this option and do not select any tables, ClaimCenter
shows statistics data from the database metadata only.
4. Click Download.
5. Save the report ZIP file to a local directory.
6. Unzip the file.
7. Double-click file index.html to open the HTML report summary.
8. Use the report links to navigate through the various database statistics reports.
Next steps
See also
• “Generate and Download a Database Statistics Report” on page 374
• “Working with Database Statistics Reports” on page 374
Action Description
Refresh Update the contents of the summary table.
Run Incremental Statistics Generate incremental database statistics. This action updates database statistics for tables exceeding
the change threshold only. This functionality is available in development mode only.
Run Full Statistics Generate full database statistics. This functionality is available in development mode only.
Download Click the download arrow to download the data from this report.
Delete Click the trash can icon to delete this data row from the summary table.
It is also possible to set table statistics preferences directly in the Oracle database by executing the following
command:
DBMS_STATS.SET_TABLE_PREFS
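For example, the following SQL*Plus sketch sets a single table-level preference directly in the database. The schema name, table name, and preference shown are placeholders only; the procedure signature (owner, table, preference name, preference value) is standard Oracle.
EXEC DBMS_STATS.SET_TABLE_PREFS('CC_SCHEMA', 'CC_CLAIM', 'INCREMENTAL', 'TRUE');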
If you set table statistics preferences directly, the actual table statistics preferences no longer match the configured table statistics preferences. During a database upgrade, ClaimCenter ignores any actual table statistics preferences and sets table statistics preferences to those defined in the new database-config.xml file. The information provided on the Oracle Statistics Preferences tab provides a mechanism to capture direct changes made to the table statistics preferences that are lost during a database upgrade.
Button Actions
The following list describes the action of each button in the Oracle Statistics Preferences tab.
Button Action
Refresh Refreshes the table information on the screen.
Download Downloads the database-config table statistics options as a JSON file.
Reapply Config Reapplies the table statistics options configured in file database-config.xml to the database, discarding any preferences set outside the application. When you initiate this process, ClaimCenter inserts an entry into the application log and then inserts another entry at the end of the process.
Generate Config to Match Actual Generates an XML file that details the table statistics preferences actually in use in the
database.
Dev Mode only (Drop-down) Selects the environment in which to generate the table statistics.
You can also filter the list of tables using the drop-down at the right of the tab.
Oracle Statspack
The Server Tools Oracle Statspack screen provides a means to download HTML reports based on any two Oracle
database statspack snapshots from the same instance start-up time. You create statspack snapshots by using a tool
such as SQL*Plus or SQL Developer. Also, Oracle provides a script called spauto.sql that you can modify and run
to automate statspack snapshot gathering.
The Server Tools Oracle Statspack screen is available only if the database server is Oracle. For ClaimCenter to display
statspack information, you must also:
• Install the statspack package in your Oracle database (spcreate.sql).
• Grant SELECT privileges on all the PERFSTAT tables to the ClaimCenter database user.
If you do not install the statspack package or do not grant the correct permissions, ClaimCenter displays an error message on the screen and you cannot select any snapshots on the screen. Refer to the Oracle documentation for statspack installation instructions.
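For reference, installing and scheduling Statspack from SQL*Plus typically looks like the following sketch. The script locations assume a default Oracle installation in which ? expands to ORACLE_HOME; run the scripts as a suitably privileged database user.
@?/rdbms/admin/spcreate.sql
@?/rdbms/admin/spauto.sql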
Guidewire Recommendations
Guidewire recommends the following with regard to Oracle statspack snapshots:
• Collect statspack snapshots at regular intervals if you do not have a license for Oracle AWR.
• Collect the snapshots at level 7 or higher to collect execution plan and segment statistics.
• Regularly purge statspack snapshots older than a certain period.
Guidewire recommends that you use the Guidewire AWR tool, rather than Oracle Statspack, if you have a license
for the following:
• Oracle Diagnostics package
• Tuning package
If you do not have these licenses, Guidewire recommends that you acquire them.
See also
• “Oracle AWR” on page 377
Oracle AWR
The Server Tools Oracle AWR Information screen is available only if the database server is Oracle. Use the Oracle AWR
Information screen to generate a set of Guidewire performance reports using AWR snapshots that you define in the
database. The Guidewire AWR reports provide a view of the database activity that contains more detailed
information than the Oracle Standard AWR report available with the Oracle database.
The Guidewire AWR reports provide the following additional information:
• Messaging analysis
• Concurrent batch processes
• Distributed worker activity
• Database statistics
The Guidewire AWR reports require that you have a license for the following:
• Oracle Diagnostics package
• Tuning package, if you select either Probe in Memory SQL Monitoring or Probe on Disk SQL Monitoring
Refer to the Oracle documentation for details.
See also
• “Oracle Statspack” on page 376
• “Download an Oracle AWR Unused Indexes Report” on page 379
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages→Oracle AWR Information screen.
3. Set the options for the report.
See “Automatic Generation of Oracle Standard AWR Reports” on page 379 for a discussion of the Include
native Oracle report option.
4. Select two database snapshots from the list at the bottom of the screen.
Each snapshot must share the same Oracle instance startup time.
5. Click Generate Perf Report.
6. After ClaimCenter completes generating the report, select one of the following:
Download arrow Downloads an AWRReport.zip file that contains the set of database reports.
View icon Opens a pop-up window from which you can view the same reports contained in the AWRReport.zip file, after you supply your user credentials.
Procedure
1. Open a command prompt.
2. Navigate to the following location in the ClaimCenter installation:
admin/bin
You must enter a value for password. You must limit the list of snapshots by entering a value for numSnaps.
4. Enter the following command to generate the Guidewire AWR report:
To run the ClaimCenter command prompt tools, you must supply a user name and password for a user with
administrative privileges. If you do not supply a value for the -user parameter, the command defaults to user
su (in the base configuration) and you must supply that user’s password.
You must enter the IDs of two database snapshots.
Result
The system_tools -oraPerfReport command option reports the process ID of the process generating the
performance report. You can check on the status of this process using the -processstatus option of the
maintenance_tools command.
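For illustration, the command pair might look like the following sketch. The password value, snapshot IDs, and process ID are placeholders, and the parameter order is an assumption; see “System Tools Command” on page 422 and “Maintenance Tools Command” on page 418 for the precise syntax.
system_tools -password <password> -oraPerfReport <beginSnapshotId> <endSnapshotId>
maintenance_tools -password <password> -processstatus <processId>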
Next steps
See also
• “System Tools Command” on page 422
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages→Oracle AWR Information screen.
3. Generate and download an AWRReport.zip file as described in “Generate Guidewire AWR Reports” on page
377.
4. Extract the contents of the AWRReport.zip file to a local directory.
oraclescripts.sql
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages→Oracle AWR Unused Indexes Information screen.
3. Set the options for the report.
4. Select two database snapshots from the list at the bottom of the screen.
Guidewire recommends that you select snapshots that span a wide time range.
5. Click Download.
6. In the download dialog, set the download location.
7. Click OK to download and save the report.
8. Open the downloaded ZIP file and click index.html to open the index to the linked set of report files.
Oracle Outlines
The Server Tools Oracle Outlines screen is only available if the database server is Oracle. The screen contains
information on any Oracle outlines defined in the ClaimCenter Oracle database. Oracle defines an outline as a
collection of hints associated with a specific SQL statement, used to provide SQL execution plan stability. Consult
the Oracle documentation on how to write and use Oracle Outlines.
The Oracle Outlines summary table contains the following information:
Name Name of the Oracle outline. Click to open the Outline Details screen. This screen contains the SQL hints used to
define the outline.
Category Optional name used to group stored outlines.
Used Whether the Oracle database has ever used this particular outline.
Time Stamp Date and time of the last use of this outline.
Note: Oracle AWR contains a column that indicates if ClaimCenter used an Oracle Outline for a given SQL
statement.
See also
• “Oracle AWR” on page 377
• “View an Oracle Outlines Report” on page 380
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Info Pages→Oracle Outlines screen.
3. Click Download.
4. In the download dialog, set the download location.
5. Click OK to download and save the report.
6. Open the downloaded ZIP file and click Outlines.html to open the report.
Next steps
See also
• “Oracle Outlines” on page 380
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. In the left-hand navigation pane, navigate to Server Tools Info Pages→SQL Server DMV Snapshot.
3. (Optional) Select Include Database Statistics if you want to include database statistics information in the report.
4. Click Generate Perf Report.
This action launches an internal batch process that gathers performance data and creates the report.
5. After the batch process completes, select one of the following in the summary table:
Download arrow Downloads a DMVReport.zip file that contains the set of database reports.
View icon Opens a pop-up window from which you can view the same reports contained in the DMVReport.zip file.
6. After you download the DMVReport.zip file, unzip the file into its own directory.
7. Locate file index.html and double-click it to open it in a browser.
8. Use the links on the screen to navigate through the distribution reports.
Note: Using this screen is a better option if you are tracing a particular operation, because it minimizes the system impact and the size of the trace file.
See also
• Installation Guide
Load History
The Server Tools Load History Information screen displays information about specific ClaimCenter database operations.
For example, loading data into the staging tables impacts the loader history information on this screen.
On this screen:
• Click Refresh to reload and update the table data.
• Click Edit to make the Description field for each load history writable.
The load history summary table contains the following information:
Download Click the Download arrow to download a LoadHistoryInfo.zip file that contains a set of HTML reports. The
reports consist of a summary table and a set of links to individual reports that load different views of the
database operation data onto the screen. These are the same reports that are available by clicking the View
icon.
View Click the View icon to view the on-screen Load History Detail report for the selected database operation. The
detail view consists of a summary table and a set of tabs that load different views of the database operation
data onto the screen. These are the same reports that are available for download by clicking the Download
icon.
Delete Click the trash can icon to remove the data for this database operation.
Load Operation Type Type of database operation. This can be, for example, any of the following:
• Database table load operations
• Staging table clearing operations
• Database statistics generation operations
Start Time Start date and time of the database operation.
End time Completion date and time of the database operation.
Duration Length of time, in seconds, to complete the operation.
Error Count Number of reported errors for this database operation.
Calling User Name of user who initiated this database operation.
Description Click Edit to make the Description field for each database operation row writable. Click Update after entering
text to save your work and update the field.
See also
• “View a Load History Report” on page 382
• “The Load History Detail Report” on page 383
Procedure
1. Log into Guidewire ClaimCenter using an administrative account.
2. Navigate to the Server Tools Load History Information screen.
3. For the database load operation that interests you, select one of the following in the summary table:
Download arrow Downloads a LoadHistoryInfo.zip file that contains the set of database reports.
View icon Opens a new screen that displays the Load History Detail report. See “The Load History Detail Report” on
page 383 for details.
4. If you downloaded the report file, unzip the file into its own directory.
5. Locate the index.html file and double-click it to open it in a browser.
6. Use the links on the screen to navigate through the linked reports.
Parameters Lists the values of the configuration parameters used in generating the data for the database operations
reports.
Steps Lists the individual steps in the data load operation. Click a step to view detail data about that step.
Row Counts Lists information about database tables impacted by the data load. Use the information on this screen to quickly assess whether the amount of data loaded by the operation was the amount that you expected the operation to load.
Integrity Checks Lists the integrity checks that ClaimCenter ran against each affected database table before the data load
operation.
Inserts Lists the results of the SQL INSERT_INTO queries that ClaimCenter ran against the affected database tables.
Callbacks Lists the operations that ClaimCenter executes before and after a staging table load operation.
Statistics Commands Lists the SQL commands used to generate database statistics.
Parameter Modification
The following table describes how to set the database parameter values.
See also
• “Load History” on page 382
• “The Load History Detail Report” on page 383
• “Import Tools Options” on page 417
• Integration Guide
View by Staging Table This tab lists the set of integrity checks that ClaimCenter executes against a specific staging table. Use the Staging Table drop-down to select the table of interest.
View by Load Error Type This tab lists the set of tables against which ClaimCenter runs a particular integrity check. Use the Load Error Type drop-down to select a specific integrity check.
In both tabs, you can set the value of Allow Non Admin References to true. If you do, ClaimCenter checks foreign key references to administrative tables, such as users and groups, on load. In the base configuration, Guidewire disables these references by default.
See also
• Integration Guide
Load Errors
The Server Tools Load Errors screen displays errors generated by failed integrity checks. You can use this screen to
drill down through a table name to the specific error generated by a load operation. Errors relate to a particular
staging table row. For each error, the Load Errors screen shows:
• The table
• The row number
• The logical unit of work ID (LUWID)
• The error message
• The data integrity check query that failed.
In some cases, ClaimCenter cannot identify or store a single LUWID for the error. For example, this might happen
for some types of invalid ClaimCenter financials imports.
Column Description
Entity Entity name
Order Entity ranking order
Table Entity database table name
Preupdate Whether the entity triggers a Preupdate rule set
To see only those objects affected by Preupdate rules, select With rules only from the drop-down filter.
Preupdate Rules
In running the rules in the Preupdate rule set, ClaimCenter first computes the set of objects on which to run the
Preupdate rules. ClaimCenter then runs the Preupdate rules for this set of objects in the order listed in the Order column of the table.
Serialization Info
The Server Tools Serialization Info screen (under Info Pages) shows, for any specific server in the cluster, the entire set
of Java objects (classes) deserialized by that server instance. This screen contains an optional filter (Including listed in
the serialization whitelist classes) that filters the list of classes:
• Checking this box means that the list of class names includes the names of all Java classes encountered and
deserialized by the local server. This list includes the names of classes that exist in the serialization white
(permitted) list as well.
• Un-checking this box means that the list of class names includes only the names of classes encountered and
deserialized that are not on the serialization white list. Guidewire recommends that you add these classes to the
serialization whitelist. After you complete your whitelisting of Java classes, the class listing will be empty.
#Incorrect example
gw.api.*.myClass
• Use blank lines and leading spaces as desired to enhance readability of the file.
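For illustration only, a hypothetical whitelist fragment might look like the following. The class names are placeholders, and each entry is written as a fully qualified class name on its own line, in contrast to the incorrect wildcard example above.
# Custom classes permitted for deserialization (hypothetical entries)
com.example.integration.MyMessagePayload
com.example.claims.MyCustomDTO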
Management Beans
Note: The Management Beans screen is accessible to users with the soapadmin permission only.
The Server Tools Management Beans screen lists ClaimCenter management beans, which are ClaimCenter objects that
represent different resources. Click the name of a resource to open the Guidewire Managed Bean Properties screen for
the selected resource.
The Bean Properties screen most often contains a MBean Property table that lists the properties associated with the
selected resource bean. Property values are either read-only or editable. Guidewire marks the editable properties
with a small blue triangle in the upper left-hand corner of the Value field. Click anywhere in an editable field to make
that field editable. After modifying a field, you can either save your work or cancel your changes by clicking Save or
Cancel.
A few Bean Properties screens also contain an Operation table that lists available operations associated with this
resource bean. Click Execute to execute a selected operation. The Result field shows the result of the operation.
Startable Services
The Server Tools Startable Services screen contains summary information on all of the startable plugin services in the
ClaimCenter cluster. A startable plugin is a special type of ClaimCenter plugin. At runtime, ClaimCenter creates a
background process, or service, for each startable plugin. The summary table provides the following information for
each service.
See also
• Integration Guide
See also
• “The Cluster Components Screen” on page 391
• “Download a Server Component Report” on page 392
• “Schedule a Planned Cluster Member Shutdown” on page 392
Host Name of the machine on which this ClaimCenter server instance is running.
Server ID Name for this server instance. You specify the server ID by either adding an entry for this cluster member in the <registry> element in config.xml or by setting a JVM option at server startup. If you do not specify a server ID, ClaimCenter uses the host (machine) name as the server ID.
UUID Universally Unique ID for this server machine. ClaimCenter randomly generates a UUID for each machine at each
machine startup.
Server Roles List of server roles assigned to this ClaimCenter server.
See also
• “Understanding the Configuration Registry Element” on page 46
• “Server Roles” on page 141
Banner text Choose the text of the banner message to show on the ClaimCenter screen to warn users of the server shutdown. The banner message appears as soon as you schedule the shutdown and contains a countdown to the time of the scheduled shutdown. You can choose from several listed messages or create your own custom message.
Shutdown date and time Set the date and time of the server shutdown.
Action to take with respect to running batch processes Decide how to handle any currently running batch processes.
After you initiate a scheduled shutdown, ClaimCenter does the following on the affected server:
• It does not start any new batch processes.
• It manages the stopping of any currently running batch processes, depending on the setting of the Terminate Batch Processes field.
• It requests that all work queues and message destinations stop processing immediately.
• It completes the transmission of any current messages without starting any new message transmissions.
• It completes the processing of any current work items without beginning work on any new work items.
After you click OK in the Schedule Planned Shutdown confirmation dialog, ClaimCenter reopens the Cluster Members
screen. In this screen, you now see the following:
• The Actions entry for the affected server contains a Cancel planned shutdown button.
• The Planned Shutdown entry for the affected server provides information about the planned shutdown.
• The screen contains a banner with the chosen shutdown message and a countdown timer to the server shutdown
date and time.
Web.TabBar.SystemAlertBar.PlannedShutdown.RollingUpgradeMessage Rolling upgrade in progress, please save your work and log out to redirect to a new server.
Web.TabBar.SystemAlertBar.PlannedShutdown.ScaleInMessage Please save your work and log out to re-direct to the new server.
After you initiate a planned shutdown, ClaimCenter stores only the current time and the shutdown time in the
database. It stores other information, such as the shutdown message itself, in a single-threaded atomic reference in
memory. ClaimCenter clears this message reference under the following conditions:
• At the restart of the shutdown ClaimCenter server
• At the cancellation of the planned server shutdown
In either case, ClaimCenter does not show a Planned Shutdown status of ready on the Cluster Members screen until all
the affected processes have stopped on the server.
Note: Batch processes run as non-daemon threads. As such, it is not possible for the server to perform a graceful
shutdown if any batch processes are still running on the server at the time of the shutdown.
For more information on the use of the system_tools command, see “System Tools Command” on page 422. For
information on the use of the SystemToolsAPI web service, see the ClaimCenter Integration Guide.
Last Update Date and time of the last update on this server instance.
Planned Shutdown Date and time of any planned shutdown of a cluster member.
Actions Action button to manage a planned shutdown of a cluster server instance. The button label is either Start
Planned Shutdown or Cancel Planned Shutdown.
You can also initiate a server shutdown using the administrative system_tools command option -scheduleshutdown. See “System Tools Command” on page 422 for details.
See also
• “The Cluster Components Screen” on page 391
• “Schedule a Planned Cluster Member Shutdown” on page 392
You can view similar information to that of the Components table on the Cluster Components screen.
Transfer Requested Date and time at which the server instance received a request to transfer this component to another server instance.
Transfer Target Server ID of the server to which the transfer request was made.
Retry Failover Date and time of the deadline in which to complete the failover process.
If a lease manager detects the expiration of a component lease owned by another cluster member, that lease manager starts a failover process for this lease. This failover process is not instantaneous. It is possible
that during the failover process the cluster member performing the failover itself might fail, or lose database
connection, or encounter other potential problems.
To be able to handle this situation gracefully, the cluster member starting the failover gives itself a deadline to
complete the failover process. If the current failover process does not finish by the specified timestamp, some
other cluster member needs to start the failover process for the same component lease.
See “Automatic Failover of a Component Lease” on page 161 for a description of how ClaimCenter calculates
this timestamp.
You can view similar information to that of the Components table on the Cluster Members screen, the Components tab.
Action Description
Download cluster server report Click Download to download an HTML report of the cluster components and component history. See “Download a Server Component Report” on page 392 for details.
Filter by component type Select a component type from the Types drop-down list to filter the information by a specific
component type.
Filter by component state Select a component state from the State drop-down list to filter the information by a specific
component state.
Filter by component Click Filter by Component to open a Select Components screen. In this screen, you can select
individual components of all available types to show in the component table.
Refresh the component information Click Refresh to update the server component information in the table.
Review component history detail Click the name of a component to open a component history detail screen.
Procedure
1. Navigate to the Server Tools Cluster Members screen.
2. Find the table row that corresponds to the cluster member for which you want to schedule a shutdown.
3. If necessary, scroll the screen all the way to the right until you see the Actions column.
4. Click Start Planned Shutdown.
5. In the Schedule Planned Shutdown screen, select the message that you want to show to the application user about
the planned shutdown.
Select one of the provided messages, or enter custom text in the provided field.
6. Select the date and time of the planned shutdown.
7. Decide how you want ClaimCenter to handle the currently running batch processes (including work queue
writers) on the server.
See “About Planned Server Shutdowns” on page 387 for more information on the option.
8. Click Schedule Shutdown.
Result
After you schedule the shutdown, the Cluster Members screen shows the following for your selected cluster member:
• The Planned Shutdown column shows the date and time that you initiated the planned shutdown. It also shows the
date and time of the actual planned shutdown.
• The Actions column shows a Cancel Planned Shutdown button. To cancel a planned server shutdown, click Cancel
Planned Shutdown.
All ClaimCenter screens, for all users logged into the affected cluster member, display a banner indicating the date
and time of the planned shutdown and your selected message.
Version Version information for each application installation. ClaimCenter provides the version information in the
following format:
Guidewire build.customer build (n,n,n,n,n)
These numbers have the following meaning:
• Guidewire build – Guidewire application version, for example, 9.0.0.
• Customer build – Custom label, defined in file customer-version.properties. If not defined, this field is empty.
• (n,n,n,n,n) – Numbers that have the following meaning:
◦ Platform major version
◦ Platform minor version
◦ Application major version
◦ Application minor version
◦ Data model version
See “Understanding Guidewire Software Versioning” on page 395 for more information.
Status Status of each upgrade, for example, new schema or upgraded.
Type Type of each upgrade, for example, install or full.
Start Time Date and time of the beginning of the upgrade.
End Time Date and time of the completion of the upgrade.
Duration Length of time that the upgrade process took. Time is shown as both the total number of seconds involved and
the time for just the database upgrade portion of the upgrade.
Deferred Upgrade Tasks Status Execution status of the Deferred Upgrade Tasks batch process. See “Deferred Upgrade Tasks Batch Processing” on page 115 for details.
View Details Click the View icon to open a pop-up from which you can view the same reports contained in the UpgradeInfo.zip file.
See “View an Upgrade Report” on page 395 for more information.
Download Details Click the Download arrow to download an UpgradeInfo.zip file that contains a set of HTML reports describing various aspects of the upgrade process.
See “View an Upgrade Report” on page 395 for more information.
Remove Detail Data Click the trash can icon to remove this row of data.
See also
• “Review Profiler Upgrade Information” on page 406
See also
• “Upgrade and Versions” on page 393
• “Understanding Data Model Updates” on page 260
Procedure
1. Navigate to the Server Tools Upgrade and Versions screen.
2. For the upgrade process that interests you, click one of the following:
Download Details arrow Downloads a ZIP file that contains various types of upgrade information.
View Details icon Opens a pop-up window from which you can view the upgrade reports.
3. If you downloaded the report file, unzip the file into its own directory.
4. Locate file index.html and double-click it to open it in a browser.
5. Use the links on the screen to navigate through the linked reports.
A.B (a,b,c,d,e)
Version # Meaning
A Guidewire application version plus the application release build version, for example, 9.0.0.905.
B Custom version label
a Platform major version
b Platform minor version
c Application major version
d Application minor version
e Data model version number
9.0.0.905.20161017 (6,22,11,45,175)
Notice that:
• 9.0.0.905 is the application version number (9.0.0) plus the application release build version number (905).
• 20161017 is a custom label that you define in file customer-version.properties. In this case, the label
indicates the date of a ClaimCenter configuration upgrade. If not defined, this field is empty.
• 175 is the data model version number that you define in file extensions.properties.
WARNING In a production environment, Guidewire requires that you increment the data model version
number whenever you make changes to the data model, before you restart the application server. Otherwise,
unpredictable results can occur. Use of the extensions.properties file in a development environment is
optional.
See also
• “File customer-version.properties” on page 396
• “Create a Custom Version Label File” on page 397
• “Deploy a Custom Version Label File on Tomcat” on page 397
• Configuration Guide
File customer-version.properties
Use file customer-version.properties to define a custom build version number. If you create this file and
populate it correctly, ClaimCenter appends your custom version number to the Guidewire software version label.
Most commonly, you use a custom build number to label and track a configuration deployment that you undertake as
a rolling upgrade.
File customer-version.properties contains the following single property:
customer.build
customer.build=20160914-PCF-upgrade
ClaimCenter does not contain file customer-version.properties in the base configuration. Instead, you must
create this file and place it in a location that ClaimCenter recognizes.
See also
• “Understanding Guidewire Software Versioning” on page 395
• “Create a Custom Version Label File” on page 397
• “Deploy a Custom Version Label File on Tomcat” on page 397
Procedure
1. In the Studio Project window, expand configuration→res.
2. Select res and right-click to open the context menu.
3. Select New→File.
4. Name the file customer-version.properties.
5. Enter a value for customer.build, for example:
customer.build=20160914-PCF-upgrade
Next steps
See also
• “Understanding Guidewire Software Versioning” on page 395
• “File customer-version.properties” on page 396
• “Deploy a Custom Version Label File on Tomcat” on page 397
Procedure
1. Create a Tomcat WAR file:
a. Open a command prompt in the ClaimCenter installation directory.
b. Execute the following command:
gwb warTomcatDBcp
ClaimCenter adds file customer-version.properties to the following JAR file in the generated WAR
file:
WEB-INF/lib/configuration.jar
Result
After starting the server, the Server Tools Upgrade and Versions screen shows the custom version label, appended to
the standard application version number.
Next steps
See also
• “Understanding Guidewire Software Versioning” on page 395
• “File customer-version.properties” on page 396
• “Create a Custom Version Label File” on page 397
Cache Info
The Server Tools Cache Info screen provides information in both table and chart form of ClaimCenter server cache
information. Guidewire recommends that you use this information to help you monitor how well the cache is
performing.
See also
• “Server Cache Tuning Parameters” on page 71
• Configuration Guide
Max Cache Space (KB) Maximum amount of space to allot to the global cache.
Stale Time (mins) Maximum time allowed for an object to be in the cache without a database refresh.
Page Loaded at Date and time of the last refresh of the data on this screen.
Edit Click Edit to enter different values for Maximum Cache Space and Stale Time. If you change these parameters from the Cache Summary view, the values you specify apply only to the ClaimCenter server to which you are connected. If you restart the server, your changes are lost. For your changes to persist, edit their values in file config.xml.
Refresh Click Refresh to update the information shown on this screen. This action also updates the date and time values
shown for the Page Loaded at field.
Download Click Download to download a CSV-formatted file containing detailed cache information. See also “Understanding
the Cache Data Report” on page 399.
Clear Global Cache Click Clear Global Cache to clear the cache of all entities. Note, however, that the cache always contains some objects to support an active server.
Graph Description
Cache Size The memory used by the cache over time.
Hits and Misses (Stacked) The number of cache hits (an object was found in the cache) and misses (object was not found
in the cache) and the miss percentage.
Type of Cache Misses The number of cache misses caused by ClaimCenter evicting an object because the cache was full, and the number of misses caused by ClaimCenter evicting an object due to reaping. This graph is not visible if configuration parameter GlobalCacheDetailedStats in config.xml is set to false.
Current Age Distribution The number of objects of various ages in the cache.
Current Cache Contents for Age All The percentage of types of objects present in the cache for all ages.
See also
• Configuration Guide
Graph Description
Space Retained Memory used by the cache over the past couple of days. The time period shown is much longer than that of the cache size graph on the Cache Summary tab, which displays only the past 15 minutes. In this case, the x-axis represents the average values for each time period during each of the past eight days, which allows you to compare cache behavior against hourly trends.
Hits and Misses (stacked) Number of hits (object was found in the cache) and misses (object was not found in
the cache) and the miss percentage over the past day and past seven days.
Miss % The percentage of cache read attempts in which the object was not found in the
cache over the past day and the past seven days.
Number of Misses because item was evicted when cache was full The number of misses over the past day and the past seven days due to ClaimCenter having evicted an object from the cache because the cache was full.
Graph Description
Age Distribution by time A number of graphs that show the age distribution of objects in the cache. The Cache Details tab
shows age distributions for zero to 30 minutes ago.
Current Cache Contents by age A number of graphs that show the percentage of types of objects in the cache over time.
In looking at the cache data provided in the downloaded report, Guidewire recommends that you first calculate the
cache miss ratio around the time of the performance degradation. The cache miss ratio is the ratio of (misses) /
(misses + hits).
The cache miss ratio is a useful metric in that it normalizes the cache values. For example, suppose that you have the
following hit and miss cache values that you use to calculate each individual cache miss ratio.
The calculation of the cache miss ratio enables you to compare the data in a way that is not possible by simply examining the raw data.
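For example, using hypothetical values, suppose that shortly before the degradation the cache recorded 9,000 hits and 1,000 misses. The miss ratio is then 1,000 / (1,000 + 9,000) = 0.10, or 10 percent. If a later interval records 45,000 hits and 15,000 misses, the miss ratio is 15,000 / 60,000 = 0.25, which indicates a worsening trend even though both hits and misses increased.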
Guidewire Profiler
The Server Tools Guidewire Profiler screen provides access to a set of tools that are useful in gathering and analyzing
information on the runtime behavior and performance of Guidewire ClaimCenter. The Profiler records the time
spent in specific processing areas of the application code, as well as in the configured rules and PCF screens. This information can help you narrow down issues to potentially problematic components, such as PCF screens, rules, code sections, or workflows.
Profiler code is useful, for example, to record the following types of operations and information:
• SQL statements that ClaimCenter is executing
• Parameters passed to those SQL statements
• Row counts
• Name of the currently executing rule
Guidewire recommends that you exercise care in using this feature. Storing too much information can cause the
Profiler screen to become too cluttered, require more space for storage and, for long-running processes, hold on to too
much memory at runtime.
Note: Guidewire Profiler does not collect memory usage statistics. You can use a third-party tool to gather memory
usage and garbage collection information.
See also
• “Server Memory Management” on page 73
See also
• “Guidewire Profiler” on page 400
• “Guidewire Profiler Analysis” on page 404
Except for Web entry points, Guidewire Profiler stores configuration information in the database. This information
is visible to all ClaimCenter servers in the cluster. For a change in configuration to a batch process, work queue, or
message destination to take effect, you need to restart that batch process, work queue, or message destination. Any
change in configuration can take some time to propagate through the cluster. Also, it may take up to the cache stale
time for a configuration change to become visible.
The next time profiling starts for a given entry point, Guidewire Profiler checks whether profiling is enabled for that
entry point. If profiling is enabled, Guidewire Profiler records the profiling data in the form of a profiler stack. The
Profiler records multiple stacks if the initial thread spawns more threads and the developer profiles the spawned
threads. Except for Web profiling, ClaimCenter persists this data to the database, making it possible to retrieve the data later.
See also
• “Web Session Profiling” on page 402
• “ClaimCenter Application Profiling” on page 403
• “Guidewire Profiler Analysis” on page 404
• “Ways to View a Guidewire Profiler Analysis Reports” on page 405
Procedure
1. Navigate to the Server Tools Guidewire Profiler screen.
2. Select the profiling options that you want to use for this profiling session.
3. Restart the associated batch process, work queue, or message destination, if appropriate.
4. Click Enable Web Profiling for this Session.
This action returns you to the application screen from which you started.
5. Exercise the application screens that you want to profile.
6. Press ALT+SHIFT+P to return to Guidewire Profiler.
7. Click Disable Web Profiling to end the current Web profiling session.
This action generates the Web Profiler screen.
Next steps
See also
• To understand the various web profile tracing options, see “Profiler Trace Options” on page 403.
• To understand the Web Profiler screen, see “Guidewire Profiler Analysis” on page 404.
• To download and save this snapshot of application profiling data, see “View Uploaded Profiler Reports” on page
406.
To enable a specific profiling option, select (check) the box next to it. The following list describes the types of trace options that are available in Guidewire Profiler.
See also
• “Profiler Entry Points” on page 401
• “Web Session Profiling” on page 402
• “ClaimCenter Application Profiling” on page 403
• “Guidewire Profiler Analysis” on page 404
See also
• “Guidewire Profiler” on page 400
• “View Uploaded Profiler Reports” on page 406
• Rules Guide
Procedure
1. Navigate to the Server Tools Guidewire Profiler→Profiler Analysis→Profiler Analysis screen.
2. Select the profiler entry point, for example, Batch Processes.
3. In the Profiler Source area at the top of the screen, select the report that interests you.
4. In the Profiler Result area, select a value from the View Type drop-down list.
5. Click Download.
6. In the dialog that opens, select Save File and click OK.
Next steps
See also
• “View Uploaded Profiler Reports” on page 406
See also
• “Guidewire Profiler” on page 400
Procedure
1. Navigate to the Server Tools Guidewire Profiler→Profiler Analysis→By Time Range screen.
2. Click Search.
The screen shows two calendar date pickers.
3. Enter a start and stop date in which to search for existing profiler analysis reports.
ClaimCenter prints the results of the search to the screen:
• If ClaimCenter finds no existing profiler reports that occurred within the specified time frame, it prints a
message to that effect.
• If ClaimCenter does find one or more profiler reports that occurred within the specified time frame, it lists
the reports.
4. In the Profiler Source area at the top of the screen, select the report that interests you.
5. In the Profiler Result area, select a value from the View Type drop-down list.
Procedure
1. Navigate to the Server Tools Guidewire Profiler→Profiler Analysis→Saved File screen.
2. In the Restore Snapshot field, browse to find a .gwprof file for upload.
3. Click OK.
Result
The saved profiler data loads in the Saved File screen, which then becomes the Profiler Analysis screen. If you navigate
away from this screen, Guidewire Profiler deletes the uploaded data from the screen.
Next steps
See also
• “Save a Profiler Analysis Report” on page 405
Procedure
1. Navigate to the Server Tools Guidewire Profiler→Profiler Analysis→Upgrade screen.
2. In the Profiler Source area at the top of the screen, select the report that interests you.
3. In the Profiler Result area, select a value from the View Type drop-down list.
Next steps
See also
• “Upgrade and Versions” on page 393
• “Guidewire Profiler” on page 400
• “Guidewire Profiler Analysis” on page 404
Profiler Tags
Profiler tags represent sections of code that Guidewire Profiler can profile. A profiler tag is an alias for a piece of
code in the Guidewire application for which you want to gather performance information.
The code represents profiler tags by instances of the gw.api.profiler.ProfilerTag class. The constructor for the
ProfilerTag takes a String parameter defining the ProfilerTag name.
Always create a static final ProfilerTag object and preserve it. If you attempt to create more than one instance of
the same ProfilerTag object, ClaimCenter generates a warning message in the application log.
Profiler Frames
A profiler frame contains information corresponding to a specific invocation of profiled code, such as its start and
finish times.
In each Profiler session:
• Whenever the code calls push on the profiler stack, Guidewire Profiler creates a profiler frame and pushes the
frame onto the stack.
• Whenever the code calls pop on the profiler stack, Guidewire Profiler removes the profiler frame from the stack.
The Profiler continues to store the frame information, however, so that the information remains available for
future examination.
The code represents Profiler frames by instances of gw.api.profiler.ProfilerFrame.
See also
• “Understanding Properties and Counters on a Frame” on page 408
Profiler Stacks
A profiler stack stores profiling information for a specific thread. A profiler stack implements the standard push and
pop functionality of a stack. The push and pop actions correspond to the beginning and end, respectively, of a piece
of code represented by a profiler tag. Thus, at any time, the current contents of the profiler stack reflect all profiler
tags whose code ClaimCenter is currently executing. The code represents Profiler stacks by instances of
gw.api.profiler.ProfilerStack.
If a profiler stack has been initialized for the current thread, the call to Profiler.push(ProfilerTag.MYTAG)
pushes a new frame with tag MYTAG on to that profiler stack. Otherwise, the call has no effect.
Similarly, Profiler.pop(frame) is just a pass-through to calling pop on the profiler stack of the current thread.
package gw.profiler

uses gw.api.profiler.ProfilerTag

class MyProfilerTags {
  public static final var MY_TEST_TAG1 = new ProfilerTag("MyTestProfiler1")
  public static final var MY_TEST_TAG2 = new ProfilerTag("MyTestProfiler2")
  public static final var MY_TEST_TAG3 = new ProfilerTag("MyTestProfiler3")
  //..

  private construct() {
    // Do not instantiate
  }
}
To profile a block of custom code, use the following pattern to push and pop profiling information onto the profiler
stack.
uses gw.api.profiler.Profiler
...
See also
• “Understanding Properties and Counters on a Frame” on page 408
gw.api.profiler.Profiler.createPotentiallyProfiledRunnable(ProfilerTag entryPointTag,
String entryPointDetail, GWRunnable block)
This generates a new Runnable object that executes the given block. This Runnable object profiles the block if the
calling thread is also being profiled. If this is the case:
• The Profiler associates the stack for that thread with the stack of the calling thread.
• The Profiler persists that thread along with the stack of the calling thread.
See the Javadoc for the Profiler.createPotentiallyProfiledRunnable method for more details.
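The following is a minimal sketch of how such a wrapper might be used. The tag comes from the earlier MyProfilerTags example, while the detail string and the doBackgroundWork method are hypothetical, and the sketch assumes that a Gosu block can be supplied for the GWRunnable parameter.

uses gw.api.profiler.Profiler

// Hypothetical sketch: wrap background work so that it is profiled only when the
// calling thread is itself being profiled.
var runnable = Profiler.createPotentiallyProfiledRunnable(
    MyProfilerTags.MY_TEST_TAG1,    // entry point tag from the earlier example
    "background recalculation",     // entry point detail (hypothetical)
    \-> doBackgroundWork())         // work to execute; assumes a block satisfies GWRunnable

// Hand the returned Runnable to whatever thread or executor runs the background work.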
uses gw.api.profiler.Profiler
uses gw.api.profiler.ProfilerTag

// Push a frame for the custom tag (MY_TEST_TAG1 from the earlier MyProfilerTags example)
var frame = Profiler.push(MyProfilerTags.MY_TEST_TAG1)
try {
  // param and ctr are values calculated by the profiled code
  frame.setPropertyValue("PARAMETER", param)
  frame.setCounterValue("COUNTER", ctr)
} finally {
  Profiler.pop(frame)
}
After the sample code pops a profiler frame off the stack, the frame contains information about the calculated values
of PARAMETER and COUNTER. The Server Tools Profiler Analysis screen then shows these values as well.
See also
• “Guidewire Profiler” on page 400
• “Guidewire Profiler Analysis” on page 404
• “Using Custom Profile Tags with Guidewire Profiler” on page 408
See also
• “Guidewire Profiler” on page 400
• Gosu Reference Guide
JProfiler
JProfiler is a third-party tool available from ej-technologies. It is a Java profiler for use with CPU, memory, and
thread profiling. Consult the ej-technologies product documentation for details on how to use JProfiler.
Metro Reports
The Server Tools Metro Reports page tracks the Metropolitan Reporting Bureau reports that ClaimCenter is
processing. This is a cumulative report showing all the reports and their current status. You can filter the list of
reports by date, status, and step. You can also use this page to resume the processing of reports.
Internal Tools
Guidewire provides internal tools to assist you with certain administrative tasks.
WARNING Guidewire does not support the use of the tools found in the Internal Tools screens. Guidewire provides
these tools for use during development only. Guidewire does not support the use of these tools in a production
environment. Use these tools at your own risk.
See also
• “Server Tools” on page 349
Reload
The Reload screen is useful while you develop a configuration. From this screen you can reload key configuration
files into a running ClaimCenter installation. You can choose from the following options:
Option Description
Reload PCF Files Verifies and reloads all PCF files. If there are errors in the PCF files, ClaimCenter writes the errors to the
log.
Verify All PCF Files Verifies the PCF files without reloading them.
Reload Web Templates Reloads the entire ClaimCenter user interface including the config/web/templates directory.
Reload Workflow Engine Reloads the Workflow engine.
Reload Display Names Reloads label definitions only from the display_languageCode.properties file for the locale.
CC Sample Data
The CC Sample Data screen is for loading sample data into ClaimCenter for development purposes only. Guidewire
does not support this tool for a production environment.
See also
• Installation Guide
ClaimCenter includes a number of command prompt tools that you can use for administrative tasks on your
ClaimCenter server.
Note: For tools that build ClaimCenter, see the Installation Guide.
Tool Description
"Data Change Command" on page 415 Provides a mechanism for making changes to code on a running production server.
WARNING Only use the data_change command under extraordinary conditions, with great caution, and upon advice of Guidewire Support. Before registering a data change on a production server, register and run the data change on a development server. Guidewire recommends multiple people review and test the code and the results before attempting the data change on a production server.
"FNOL Mapper Command" on page 416 Integration tool that imports FNOL reports, which are initial claim reports, from a standard XML-based file format called ACORD XML. See the Integration Guide for details.
"Import Tools Command" on page 417 Set of utilities for loading XML-formatted data into ClaimCenter.
"Maintenance Tools Command" on page 418 Set of utilities for performing maintenance operations on the server (for example, running escalation/exception rules, calculating statistics, and more).
"Messaging Tools Command" on page 420 Provides a set of utilities for managing integration event messages (for example, retrying a message, skipping a message, purging the message table, and more).
"System Tools Command" on page 422 Provides a set of utilities for controlling the server (for example, pinging the server, bringing the server in and out of maintenance mode, updating database statistics, and more).
"Table Import Command" on page 430 Used for importing tables into the database.
"Template Tools Command" on page 432 Helps in converting between template versions.
"Workflow Tools Command" on page 434 Allows you to manage user workflows in the system.
"Zone Import Command" on page 434 Loads zone data from a file to a staging table.
import_tools -help
tool_name Bold font indicates that this is the actual command name, for example, import_tools.
-option All command options start with a minus sign (-). Command options are either mandatory or optional. See the
following discussion.
| An upright bar indicates a Boolean OR. For example, A | B | C means A or B or C.
{ ... } A set of curly braces indicates a set of mutually exclusive choices. You must choose one (and only one) item from the
set of choices. For example, { A | B | C } indicates that you must choose either A or B or C, but no more than one of
the listed options.
arguments Specifies the arguments required by a tool option such as a file name or directory, for example, import_tools ...
-import file.
... A series of dots after the argument indicates that you can enter multiple items of the same type. For example,
-import file ... indicates that you can enter multiple file names (file) after the -import argument.
[ ... ] A set of square brackets indicates that the argument is optional. For example, [-user] indicates that the
command permits you to set a user value (-user), but does not require that you set this value.
In contrast, an argument not enclosed in square brackets is mandatory. For example, for all the administrative
commands, the -password argument is mandatory, so the command syntax does not surround the -password
argument with square brackets.
WARNING Only use the Production Data Fix tool under extraordinary conditions, with great caution, and upon
advice of Guidewire Support. Before registering a data change on a production server, register and run the data
change on a development server. Guidewire recommends multiple people review and test the code and the
results before attempting the data change on a production server.
data_change -help
data_change -password password [-server url] [-user user] {
-edit refID -gosu filepath [-description desc] |
-discard refID |
-status refID |
-result refID }
See also
• For a description of how and when to use the data_change command to change data on a running production
server, see “Production Data Fix Tool” on page 319.
• For a description of how to use the DataChangeAPI web service, see “Data Change Web Service Reference” on
page 325.
WARNING Only use the Production Data Fix tool under extraordinary conditions, with great caution, and upon
advice of Guidewire Support. Before registering a data change on a production server, register and run the data
change on a development server. Guidewire recommends multiple people review and test the code and the
results before attempting the data change on a production server.
Option Description
-description desc Human-readable description (desc) of the change. Include this option with the edit option. For
testing, the description is optional. For production use, include the description. Put quotes around
the description to permit space characters in the description.
-discard refID Instruction to discard a data change that you already registered. You must supply a data change
reference ID (refID). You cannot discard a data change that was already run.
-edit refID Instruction to create a new data change or edit an existing data change. You must supply a unique
reference ID (refID) for this data change.
If the data change succeeded with no compile errors, you cannot edit it. You must re-register the
script with a new reference ID.
If the data change was never run, or had compile errors, you can update (edit) the Gosu code with
the same reference ID.
If you use the edit option, you must:
• Include the -gosu option to include your Gosu data change code
• Include the -description argument to provide a description
-gosu filepath Full path name (filepath) to a Gosu script. You must include this option with the edit argument.
You can use a full path name, or a relative path that is relative to the current working directory.
-password password Password (password) to use to connect to the server. ClaimCenter requires the password.
-result refID Result of a data change that you already registered. You must supply a data change reference ID
(refID). If a user attempted to run it and there were parse errors, the results include the errors.
-server url Specifies the ClaimCenter host server URL. Include the port number and web application name, for
example:
http://servername:8080/cc
-status refID Status of a data change that you already registered. You must supply a data change reference ID
(refID). This option prints the status of the data change, which is one of the following:
• Open
• Discarded
• Executing
• Failed
• Completed
-user user User (user) to use to run this process.
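For example, commands similar to the following register a data change, check its status, and then discard it. The reference ID, file path, description, credentials, and server URL are placeholder values.

data_change -user admin -password password -server http://servername:8080/cc -edit fix-2018-001 -gosu /tmp/FixClaimData.gs -description "Correct claim data"
data_change -user admin -password password -server http://servername:8080/cc -status fix-2018-001
data_change -user admin -password password -server http://servername:8080/cc -discard fix-2018-001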
The fnol_mapper command accepts first notice of loss (FNOL) claim data in an XML file. A mapping class file
contains a series of directives to transform the incoming FNOL data into a ClaimCenter Claim object. Then,
ClaimCenter imports the object.
The fnol_mapper command calls the ClaimAPI.importClaimFromXML method. See the Gosu documentation for
details. Generate Gosu documentation by opening a command prompt in the ClaimCenter installation directory
and running the command gwb gosudoc.
See also
• Integration Guide
• Gosu Reference Guide
Option Description
-input filename Name of XML input file. This command option is mandatory.
-mapper classname Name of XML mapper class that contains directives to transform FNOL data into a Claim object. By
default, fnol_mapper knows how to map from ACORD format.
-password password Password (password) to use to connect to the server. ClaimCenter requires the password.
-resultfile filename Name of the file to dump XML output. By default, fnol_mapper outputs to stdout.
-server url Specifies the ClaimCenter host server URL. Include the port number and web application name, for
example:
http://serverName:8080/cc
-user user User (user) to use to run this process. This command option is mandatory.
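For example, a command similar to the following maps an ACORD XML file and writes the resulting output to a file. The file names, credentials, and server URL are placeholder values.

fnol_mapper -user admin -password password -server http://serverName:8080/cc -input acord_fnol.xml -resultfile fnol_result.xml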
The import_tools command imports new or updated data into existing tables in the ClaimCenter database. You can
only import data for valid entities or their subtypes. ClaimCenter supports this command for importing
administrative data but not for importing other data into ClaimCenter. Instead, use staging tables or APIs other than
the ImportToolsAPI to import non-administrative types of data into ClaimCenter.
Note: ClaimCenter does not fire any events related to the data you add or modify through this command.
ClaimCenter does not throw concurrent data change exceptions if the imported records overwrite existing records
in the database.
Data that you import into ClaimCenter through the use of import_tools is immediately available. You do not need
to restart the ClaimCenter server for the changes to take effect.
IMPORTANT Guidewire supports using the import_tools command to import administrative data only.
IMPORTANT The MaximumFileUploadSize parameter in config.xml must exceed the size of any file that you
attempt to import. The MaximumFileUploadSize parameter value is in megabytes (MB). The base configuration
default value of MaximumFileUploadSize is 20 MB.
See also
• “Ways to Import Administrative Data” on page 291
• “About the import Directory” on page 292
• “Using Tools to Import Administrative Data” on page 302
• Integration Guide
Option Description
-charset charset Character set encoding (charset) for the files to import. If this option is null, ClaimCenter
sets the default character encoding to UTF-8. See also “Character Set Encoding for File
Import” on page 294.
-dataset integer Integer value (integer) representing the dataset to import from a CSV-formatted file, for
example:
RolePrivilege,0,default_data:2,abview,adjuster
ClaimCenter orders datasets by inclusion. The number of the smallest dataset is always 0.
Thus, dataset 0 is a subset of dataset 1, and dataset 1 is a subset of dataset 2, and so forth.
To import all data, set this value to -1.
-ignore_all_errors Causes the tool to ignore any errors in a CSV-formatted input file.
-ignore_null_violations Causes the tool to ignore violations of null constraints in a CSV-formatted input file.
-import filename1, filename2, ... Imports administrative data from one or more CSV (comma-separated values data) files or
XML files.
It is possible to provide a list of file names in a separate file. To do so, create a file that
contains a comma-separated list of files names. Prefix an @ character to the name of the
list file, for example:
-import @files.lst
To convert data using the -output_csv or -output_xml options, provide only a single file
name.
-output_csv filename If used with the -import option, outputs comma-separated values to the specified file and
then stops processing. ClaimCenter imports no data into the server. Use this option to
convert XML input files to CSV-formatted output files.
-output_xml filename If used with the -import option, outputs XML to the specified file and then stops
processing. ClaimCenter imports no data into the server. Use this option to convert CSV
input files to XML-formatted output files.
-password password Password to use to connect to the server. ClaimCenter requires the password value.
-privileges Adds the role privileges contained in file roleprivileges.csv in the Studio
modules/configuration/config/import/gen folder to those roles that already exist in the database.
See also “About Adding Admin Data after Initial Server Startup” on page 294.
-server url Specifies the ClaimCenter host server URL. Include the port number and web application
name, for example:
http://servername:8080/cc
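For example, a command similar to the following imports administrative data from an XML file. The file name, password, and server URL are placeholder values.

import_tools -password password -server http://servername:8080/cc -import admin_users.xml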
The maintenance_tools command starts, terminates, or retrieves the status of a ClaimCenter process. For a list of
processes that the maintenance_tools command can start, see “The Work Queue Scheduler” on page 93.
Option Description
-args arg1 arg2 ... Arguments to use while starting a process. Use only with -startprocess.
If you have multiple arguments, separate each one with a space. The
command does not validate the provided arguments.
To use arguments with custom batch processes, see the Integration Guide,
especially the following method:
ProcessesPlugin.createBatchProcess(type, args)
-changesubtypepublicid ID Public ID of the contact whose subtype you want to change. You must also
set the following option if using this option:
-changesubtypetargettype
-changesubtypetargettype type Target type to which to change the contact. You must also set the following
option if using this option:
-changesubtypepublicid
-claims n1, n2, n3, ... Comma-separated list of one or more claim numbers (n1, ...). Several
other options use -claims to select the claims on which to operate.
It is not possible to combine this option with either the -file or -policies options.
-forceall If true:
• Instructs the -rebuildagglimits command option to mark all
aggregate limits in the database as invalid.
• Starts the AggLimitCalc work queue to rebuild the aggregate limits
asynchronously.
-markforpurge -claims n1, n2, n3, ... Marks the claims identified by claim number (n1, ...) for purge. See also
“Understanding Claim Purging” on page 273.
-markforpurge -file Marks the claims identified in -file for purge. See also “Understanding
Claim Purging” on page 273.
-password password Password (password) to use to connect to the server. ClaimCenter requires
the password.
-policies n1, n2, n3, ... Comma-separated list of one or more policy numbers (n1, ...). The
-rebuildagglimits command option uses -policies to select the policies on
which to operate.
It is not possible to combine this option with either the -claims or -file
options.
-processstatus process Returns the status of a batch process. For the process value, specify a valid
process name or a process ID.
For work queues, this option returns the status of the writer process. It
does not check whether additional work items remain in the work queue.
Thus, it is possible for the process status to report completion after the
writer finishes adding items to the work queue while the work queue
contains unprocessed work items.
-purgefromaggregatgelimit Use this option with the -markforpurge option to determine whether to
purge claims that are part of an aggregate limit. A value of true means
purge the claims. A value of false means do not purge the claims.
-rebuildagglimits Rebuilds the aggregate limits on policy periods for policies identified by
-policies, or the policy periods that contain claims identified by -claims or
-file.
Guidewire recommends that you increase the number of workers used for
this process. See “Aggregate Limit Calculations Batch Processing” on page
104 for details.
-restore -claims claimnumber Restores one or a group of claims from the archive, with the supplied
comment.
To restore a single claim, type:
maintenance_tools -restore comment -claims claimnumber -user user -password password
To restore a group of claims, use the same options, but replace claimnumber with a file name.
This ASCII file must contain a list of claim numbers, separated by new lines.
-restore -file filename Restores a group of claims from the archive, with the supplied comment.
This option uses the following format:
maintenance_tools -restore comment -file filename -user user -password password
The ASCII file specified by filename must contain a list of claim numbers,
separated by new lines.
-scheduleforarchive -claims claimNumber1, Flags an individual claim or comma-delimited list of claims for archiving. A
claimNumber2... separate process later archives claims flagged by this process. See the
Integration Guide.
-scheduleforarchive -file filename Flags multiple claims for archiving. The file contains a list of claim numbers
to schedule for archiving. A separate process later archives claims flagged
by this process. See the Integration Guide.
-server url Specifies the ClaimCenter host server URL. Include the port number and
web application name, for example:
http://serverName:8080/cc
-startprocess process -args... Starts a new batch process. For the process value, specify a valid process
code. See also -args.
For a list of batch process codes, including work queue writer processes,
see “Work Queues and Batch Processes, a Reference” on page 103.
-terminateprocess process Terminates a batch process. For the process value, specify a valid process
name or a process ID.
It is not possible to terminate single phase processes using this option.
Single phase processes run in a single transaction. Thus, there is no
convenient place to terminate the process. See “Work Queues and Batch
Processes, a Reference” on page 103 to determine if it is possible to
terminate a process.
-user user User (user) to use to run this process.
-whenstats Reports the last time ClaimCenter calculated statistics on the server.
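For example, commands similar to the following report when statistics were last calculated and mark two claims for purge. The claim numbers, credentials, and server URL are placeholder values.

maintenance_tools -user admin -password password -server http://serverName:8080/cc -whenstats
maintenance_tools -user admin -password password -server http://serverName:8080/cc -markforpurge -claims 235-53-365870, 235-53-365871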
You use the messaging_tools command to manage a message destination from the command prompt. To do so,
you must know the message destination ID. The person who creates the message destination assigns this ID. You
create and configure message environments and destinations in file messaging-config.xml. Access messaging-
config.xml in Guidewire Studio at the following location:
configuration→config→Messaging
Option Description
-claim claimID Use to specify the claim ID (claimID) of the claim to re-synchronize. See -resync.
-config -destination destinationID Returns the configuration for a message destination.
-destination destinationID Specifies a message destination (destinationID).
-password password Password (password) to use to connect to the server. ClaimCenter requires the
password.
-purge date Deletes completed messages that are older than a specified date. The purge tool
deletes messages in Acked, ErrorCleared, Skipped, or ErrorRetried state with a
send time before the specified date. The date format is mm/dd/YYYY.
If the purge tool succeeds in removing these messages without error, it reports
Message table purged.
Since the number and size of messages can be very large, periodically use this
command option to purge old messages to prevent the database from growing
unnecessarily.
-restart -destination destinationID -wait wait -retries retries -initial initial -backoff backoff -poll poll -threads threads -chunk chunk
Restarts the messaging destination with new configuration settings:
• destinationID – The destination ID of the destination to restart.
• wait – The number of seconds to wait for the shutdown before forcing it.
• retries – The number of automatic retries to attempt before suspending the messaging destination.
• initial – The amount of time in milliseconds after a retryable error to retry sending a message.
• backoff – The amount to increase the time between retries, specified as a multiplier of the time previously attempted. For example, if the last retry time attempted was 5 minutes, and backoff is set to 2, ClaimCenter attempts the next retry in 10 minutes.
• poll – Each messaging destination pulls messages from the database (from the send queue) in batches on the batch server. The application does not query again until the poll interval passes. After the current round of sending, the messaging destination sleeps for the remainder of the poll interval. If the current round of sending takes longer than the poll interval, then the thread does not sleep at all and continues to the next round of querying and sending. See the Integration Guide for details on how the polling interval works. If your performance issues primarily relate to many messages for each primary object for each destination, then the polling interval is the most important messaging performance setting.
• threads – To send messages associated with a primary object, ClaimCenter can create multiple sender threads for each messaging destination to distribute the workload. These are the threads that actually call the messaging plugins to send the messages. Use the -threads option to configure the number of sender threads for safe-ordered messages. ClaimCenter ignores this setting for non-safe-ordered messages, as ClaimCenter uses one thread for each destination for these types of messages. If your performance issues primarily relate to many messages but few messages per claim for each destination, then this is the most important messaging performance setting. For more information, see the Integration Guide.
• chunk – The number of messages to read in a chunk.
-resume -destination destinationID Resumes the operation of the specified message destination.
-resync -destination destinationID Resynchronizes a claim with specified ID against a specific message destination.
-claim claimID Use -destination and -claim to specify the destination and claim.
-retry messageID Attempts to resend a message that failed. The message must be a candidate for
retrying. A message is a candidate for retry if the error at the destination system is
temporary and the message destination does not have an automatic retry
mechanism. For instance, the message is a candidate for retry if the destination
contains a locked record and refuses the update.
-retrydest destinationID Retries all retryable messages for a message destination.
-server url Specifies the ClaimCenter host server URL. Include the port number and web
application name, for example:
http://serverName:8080/cc
-skip messageID Skips a message with the specified ID. If you mark a message as skipped, then
ClaimCenter stops trying to resend the message. After you skip a message, you cannot retry it.
-statistics destinationID Prints the statistics for the specified destination.
-suspend destinationID Suspends a message destination. Use this command option if the destination
system is going to be shut down or to halt sending while ClaimCenter processes a
daily batch file.
-user user User (user) to use to run this process.
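For example, commands similar to the following suspend a message destination before planned downtime on the destination system and purge old completed messages. The destination ID, date, credentials, and server URL are placeholder values.

messaging_tools -user admin -password password -server http://serverName:8080/cc -suspend 5
messaging_tools -user admin -password password -server http://serverName:8080/cc -purge 01/01/2018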
-oraListSnaps numstats |
-oraPerfReport beginSnapshotID endSnapshotID probeVDollarTables |
-ping |
-recalcchecksums |
-reloadloggingconfig |
-requestcomponenttransfer type componentId targetOwner |
-scheduleshutdown serverId [-terminatebatchprocesses | -shutdowndelay minutes] |
-sessioninfo |
-startfullupgrade |
-updatelogginglevel loggername logginglevel |
-updatestatistics description update |
-verifyconfig filepath |
-verifydbschema |
-version }
See also
• “Managing Database Statistics using System Tools” on page 282
• “Database Statistics Batch Processing” on page 114
• Integration Guide
Option Description
-cancelshutdown serverId Cancels the planned shutdown of the server specified by serverId.
Option Description
Specify the checkTypeSelection argument as one of the
following:
• all – Run all consistency checks on the specified tables.
• checkName – The typecode of a single consistency check to
run.
• @fileName – The name of a file with one or more valid
consistency check names entered in comma-separated
values (CSV) format.
If you specify one optional argument, you must specify both.
To run consistency checks from ClaimCenter, use the Server
Tools Consistency Checks→Info Page screen, described in
“Consistency Checks” on page 361.
For more information, see “Database Consistency Checks” on
page 262.
-completefailedfailover type componentId Manually completes component failover for a failed
component. You must supply the component type (type) and
component ID (componentId).
-components Provides information about the components that exist on
each ClaimCenter server in the cluster. The report contains the
following information for each component:
• Component type
• Component code
• Component state
• Component start date and time
• Server ID of the server instance on which the component
exists
• Component ID
The report information is similar, but not identical, to the
cluster information available from the Server Tools Cluster
Members and Cluster Components screens. See “Cluster Members
and Components” on page 386 for information on these
screens.
-daemons Sets the server to the daemons run level. For information
about the various run levels, see “Server Run Levels” on page
59.
-dbcatstats Used with no arguments, the option returns a ZIP file of
database catalog statistics info for all the tables in the
database.
The -dbcatstats option takes the following optional
arguments:
• regularTables
• stagingTables
• typelistTables
This option, used with three arguments, returns a ZIP file of
database catalog statistics info for the specified tables.
Specify each of the arguments as one of the following:
• all/none – Select all/none tables of this type, or
• <tableName> – The name of a single table of this type, or
• @<fileName> – The name of a file with one or more valid
table names of this type entered in comma-separated
values (CSV) format.
For example, -dbcatstats none none all returns database
catalog statistics information for all the typelist tables. You
must specify either no arguments or all three arguments if
you use this command option.
You can specify the target destination for the database catalog
statistics ZIP file by adding the -filepath filepath option. If
you do not provide a path, ClaimCenter uses the current
directory.
This process can take a long time, and it is possible for the
connection to time out. If the connection times out while
running this command option, try reducing the number of
tables on which to gather statistics by using the arguments
listed previously.
For information about configuring database statistics
generation, see “Understanding Database Statistics” on page
279.
-evenifincluster [-filepath filepath] Consider the cluster member as failed even if it is still in the
cluster. Use only with the -nodefailed option.
The -filepath parameter sets the location (filepath) for an
optional report.
IMPORTANT This command option overrides an important
safety check on the server. Use this command option in
certain defined circumstances only. See -nodefailed for
details.
-getdbccstate Returns the status of any currently executing database
consistency checks, for example, Processing or Completed.
-getdbstatisticsstatements Retrieves the list of SQL statements to update database
statistics and prints the list to the console. See
“Understanding Database Statistics” on page 279.
-getincrementaldbstatisticsstatements Retrieves the list of SQL statements to update database
statistics for tables exceeding the change threshold. Prints the
list to the console.
The incrementalupdatethresholdpercent attribute of the <
databasestatistics> element in database-config.xml
defines the change threshold. See “Understanding Database
Statistics” on page 279.
-getPerfReport ID Downloads the performance report with the specified ID. You
can retrieve a list of available performance report IDs by
running the -listPerfReports command option.
-getupdatestatsstate Returns the state of the process running the statistics update.
-listPerfReports number Lists IDs and other information for available database
performance reports. You can specify an optional integer
(number) to specify the number of available downloads to list,
ordered starting with the most recent. If unspecified or 0, this
command lists all available downloads.
The list shows the ID of the report and the status, indicating if
the performance report batch job succeeded, failed, or is still
running. The list also includes the start and end times of the
batch job and the description of the batch run.
You can use the ID of the performance report to download the
report with the -getPerfReport ID option.
-loggercats Displays the available logging categories.
-maintenance Sets the server to the maintenance run level. For information
about the various run levels, see “Server Run Levels” on page
59.
-mssqlPerfRpt numTopQueries numHotObjects collectStatistics Generates a SQL Server DMV (Dynamic Management Views)
performance report using the MSDMReport batch process. This
command option has the following arguments:
• numTopQueries
• numHotObjects
• collectStatistics
Replace numTopQueries and numHotObjects with integer
values for the number of top queries and hot objects to
report.
Replace collectStatistics with true or false to specify
whether ClaimCenter gathers database statistics while
generating the DMV report.
You must specify all three arguments or none. If you do not
specify any arguments, ClaimCenter uses defaults of 400 top
queries, 400 hot objects, and collects statistics.
-multiuser Sets the server to the multiuser run level. For information
about the various run levels, see “Server Run Levels” on page
59.
-nodefailed serverId Releases any tasks owned by the ClaimCenter server specified
by serverId. Only use this command option if the server
referenced by serverId has already been stopped or
otherwise shut down.
See also the -evenifincluster option.
IMPORTANT You must ensure that the server referenced by
serverId is actually stopped if using the -evenifincluster
option. ClaimCenter does not prevent you from using this
option if the server is still running. However, this option
overrides an important safety check on the server. It can
produce unexpected and possibly negative results if the
server is running.
Use the -evenifincluster option only if both of the
following are true:
• The server in question is no longer running.
• The standard operation of the -nodefailed command
failed due to the server retaining its cluster membership.
-nodes Provides information about each ClaimCenter server in the
cluster. The report contains the following information on each
cluster member:
• ID of this cluster server
• Whether the server instance is actively in the cluster
• Server run level
• Time the server instance started
• Time at which ClaimCenter last updated the server
instance
• Number of user sessions active on the server instance
• Whether a planned shutdown is in progress
• Time of the planned shutdown
• Whether background tasks are still active on the server
The report information is similar, but not identical, to the
cluster information available from the Server Tools Cluster
Members screen. See “Cluster Members and Components” on
page 386 for information on that screen.
-oraListSnaps numSnaps Lists numSnaps number of available Oracle AWR snapshot IDs,
starting with the most recent snapshot. You can generate
performance reports using the -oraPerfReport option with
these available beginning and ending snapshot IDs.
-oraPerfReport beginSnapshotID endSnapshotID probeVDollarTables Generates a Guidewire AWR performance report using the
OraAWRReport batch process. This command option has the
following arguments:
• beginSnapshotID
• endSnapshotID
• probeVDollarTables
Specify the beginning and ending snapshot IDs and whether
to probe VDollar tables. The two snapshots must share the
same Oracle instance startup time.
The third argument can also specify a file by prefixing the file
name with an @ sign, for example, @filename.properties.
Optionally, you can prefix the file name with the path to the
file, if the file is not in the current directory. This file is a
standard properties file with the following property names
(default value in parenthesis):
• probleVDollarTables (false)
• capturePeekedBindVariables (false)
• searchQueriesMultipleHistoricPlans (false)
• searchQueriesBeginSnapOnly (true)
• searchQueriesEndSnapOnly (true)
• includeInstrumentationMetadata (false)
• outputRawData (false)
• includeDatabaseStatistics (true)
• probeSqlMonitor (true)
• capturePeakedBindVariablesFromAWR (false)
• genCallsToAshScripts (false)
You must spell and capitalize each property as shown or
ClaimCenter ignores the property. If you specify a property,
you must set value of that property to either true or false. If
you do not specify a property, ClaimCenter uses the default
value for that property.
The -oraPerfReport option reports the process ID of the
process generating the performance report. You can check on
the status of this process using the following command:
maintenance_tools -processstatus processID
View the performance report on the Info Page. See “Oracle
AWR” on page 377.
-password password Password (password) to use to connect to the server.
ClaimCenter requires the password.
-ping Pings the server to check if it is active. The returned message
indicates the server run level. The possible responses are:
• MULTIUSER
• DAEMONS
• MAINTENANCE
• STARTING
For information about functionality available at various run
levels, see “Server Modes” on page 57.
-recalcchecksums Recalculates file checksums that ClaimCenter uses for
clustered configuration verification.
-reloadloggingconfig Directs the server to reload the logging configuration file.
-requestcomponenttransfer type componentId Requests transfer of ownership of a component of the
targetOwner specified type (type) and ID (componentId) to the specified
ClaimCenter server (targetOwner).
Use the -components command option to determine the
component information to enter.
The -requestcomponenttransfer command option fails if the
component cannot be successfully stopped or the current
owning server is unable to process the request.
-scheduleshutdown serverId [-terminatebatchprocesses | -shutdowndelay minutes] Schedules the planned shutdown of the server specified by serverId. Use with the optional -terminatebatchprocesses or -shutdowndelay minutes arguments.
-server url Specifies the ClaimCenter host server URL. Include the port
number and web application name, for example:
http://serverName:8080/cc
-updatelogginglevel logger level Sets the logging level of logger with the given name. For the
root logger, specify RootLogger for the logger name.
-updatestatistics description update Launches the Update Statistics batch process to update
database statistics.
It is possible to specify an optional text description
(description) for this batch process execution. ClaimCenter shows the
text of the description on the Execution History tab of the
Database Statistics info page.
Specify one of the following values for update:
• true – Update database statistics for tables exceeding the
change threshold only. Guidewire defines this change
threshold through the incrementalupdatethresholdpercent
attribute of the <databasestatistics> element in
database-config.xml.
• false – Generate full database statistics.
IMPORTANT Updating database statistics can take a long time
on a large database. Only collect statistics if there are
significant changes to data, such as after a major upgrade,
after using the table_import -integritycheckandload or
zone_import commands, or if there are performance issues.
See also
• “Understanding Database Statistics” on page 279
• “Database Statistics Batch Processing” on page 114
• “Database Statistics” on page 373
-user user User (user) to use to run this process.
-verifyconfig filepath Compares the following two server configurations:
1. The new or target server configuration contained in a
WAR/EAR file pointed to by the filepath parameter.
2. The existing server configuration of the cluster member
on which you run the system_tools command.
The tool provides an on-screen report that contains
information about the feasibility of a configuration upgrade
for the server instances in the cluster. For example, the tool
provides the following types of information:
• Configurations are different – Requires a full ClaimCenter
server upgrade.
• Configurations are identical – No upgrade is necessary.
• Configurations are compatible – Guidewire permits a
rolling upgrade.
If a rolling update is not possible, the command lists the
incompatible or missing files.
If a rolling update is in progress, there are two possible
configurations active in the cluster. Each individual server
instance is using either the source configuration or the target
configuration.
The -verifyconfig command option checks for both
configurations on the server instances on which you run the
command and reports which of the configurations is active on
this cluster member. If neither configuration is active, the
command reports that a rolling update is in progress and that
it is not possible to verify the configuration at this time.
-verifydbschema Verifies that the data model matches the underlying physical
database.
-version Returns the running server version, the database schema
version, and configuration version.
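For example, commands similar to the following ping a server and then set it to the maintenance run level before loading staging tables. The credentials and server URL are placeholder values.

system_tools -user admin -password password -server http://serverName:8080/cc -ping
system_tools -user admin -password password -server http://serverName:8080/cc -maintenance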
The table_import command loads data from staging tables into ClaimCenter. Most of the options for this
command require the server to be at the MAINTENANCE run level. Before you use those command options, use the
system_tools -maintenance command option to set the server run level to MAINTENANCE. Use the system_tools
-multiuser command option to set the server run level to MULTIUSER after the table import command completes.
It is not possible to use the maintenance_tools -terminateprocess command option to terminate the table_import
command.
See also
• “Load History” on page 382
• “System Tools Command” on page 422
• “The loader Database Configuration Element” on page 231
• Integration Guide
Option Description
-allreferencesallowed Allows references to existing non-administrative rows in all operational
tables. If there are rows in staging tables for CheckGroup or CheckPortion,
you must use this command option.
This option only applies with the following command options:
-integritycheck
-integritycheckandload
This option corresponds to the Boolean parameter
allowRefsToExistingNonAdminRows that the integrity check methods of the TableImportAPI web
service use. Guidewire recommends that you use this option or the
equivalent API parameter, set to true only if absolutely necessary. For
example, use it in the rare case that a policy period overlap exists between
the existing operational data and the data to load.
This option can cause performance degradation during the check and load
process, which would noticeably slow down the loading of staging tables.
See also “The Load History Detail Report” on page 383.
-batch Runs the table_import command in a batch process. This option only
applies with the following command options:
-deleteexcluded
-encryptstagingtbls
-integritycheck
-integritycheckandload
-populateexclusion
-updatedatabasestatistics
You can run table_import in a batch process from any node in a
ClaimCenter cluster. However, table import batch processing must run
physically on a server designated as a batch server. Therefore, in running
the command, provide the URL of a batch server and also provide the user
credentials for that batch server.
-clearerror Clears the error table.
See also “The Load History Detail Report” on page 383.
-clearexclusion Clears the exclusion table.
-clearstaging Clears the staging tables. Requires the server to be at the MAINTENANCE run
level.
-deleteexcluded Deletes rows from staging tables based on contents of exclusion table.
-encryptstagingtbls Instructs ClaimCenter to use the current encryption plugin implementation
to encrypt those columns in the staging table marked for encryption. See
also the Integration Guide.
-estimateorastats Executes queries for row counts on production tables and sets the
database statistics. If you do not use this option, the import command uses
information in database statistics to report approximate row counts. Use
the -estimateorastats option only to load production tables that are
empty or have very few rows. Used with the -integritycheckandload
command option.
This command option applies only to Oracle databases.
-filepath filepath Path to target directory in which to download a report. Use with the
-getLoadHistoryReport command option.
-getLoadHistoryReport reportID Downloads a compressed Zip version of the load history report as specified
by the value of reportID. Does not require the server to be at the
MAINTENANCE run level. Use the -listLoadHistoryReports option to determine
the ID to use. Use the optional -filepath parameter to specify the target
directory for the download.
-integritycheck Validates the contents of the staging tables. You can optionally specify:
-allreferencesallowed
-clearerror
-numthreadsintegritychecking
-populateexclusion
-integritycheckandload Validates the contents of the staging tables and populates operational
tables. You can optionally specify one of the following command options as
well:
-allreferencesallowed
-clearerror
-estimateorastats
-numthreadsintegritychecking
-populateexclusion
-zonedataonly
-listLoadHistoryReports [numReports] Lists the most recent load history reports. Optional parameter numReports
is the number of reports to list:
• If you supply a positive integer for numReports, then ClaimCenter lists
that number of most recent reports.
• If you do not supply a value for numReports, then ClaimCenter lists all
available reports.
Does not require the server to be at the MAINTENANCE run level.
-messagesinks sinks, ... Deprecated. This option does nothing.
-numthreadsintegritychecking num Specifies the number of threads that ClaimCenter is to use in running
database table integrity checks. The value of num has the following
meaning:
• Not specified – ClaimCenter assumes the number of threads to be one,
no multithreading.
• 1 – No multithreading, the default.
• 2 - 100 – ClaimCenter runs the database integrity checks with the
number of specified threads.
• > 100 – ClaimCenter throws an exception.
This option only applies with the following command options:
-integritycheck
-integritycheckandload
Note: This value overrides any value set for attribute
num-threads-integrity-checking on the database <loader> element in
database-config.xml. See "The loader Database Configuration Element" on page 231.
-password password Password (password) to use to connect to the server. ClaimCenter requires
the password.
-populateexclusion Populates the exclusion table with rows to exclude.
See “The Load History Detail Report” on page 383.
-server url Specifies a ClaimCenter server URL. Include the port number and web
application name, for example:
http://serverName:8080/cc
If running the table import command in a batch process, see -batch for
more information.
-updatedatabasestatistics Updates the database statistics on the staging tables. Run the table import
command with this option after populating the staging tables, but before
using the -integritycheck or -integritycheckandload options.
See “The Load History Detail Report” on page 383.
-user user Specifies the user to use to run this process.
-zonedataonly Sets the import to load zone data only. Used with the -integritycheckandload
command option.
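For example, after populating the staging tables, a load might run with commands similar to the following. The credentials and server URL are placeholder values.

system_tools -user admin -password password -server http://serverName:8080/cc -maintenance
table_import -user admin -password password -server http://serverName:8080/cc -updatedatabasestatistics
table_import -user admin -password password -server http://serverName:8080/cc -integritycheckandload
system_tools -user admin -password password -server http://serverName:8080/cc -multiuser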
-convert_dir directory |
-convert_file filename [working_dir directory] |
-import_dir objectsfile fieldsfile directory [working_dir directory] |
-import_files objectsfile fieldsfile outfile |
-list_all_templates |
-list_doc_template |
-list_email_templates |
-list_note_templates |
-validate_all_doc_templates |
-validate_all_email_templates |
-validate_all_note_templates |
-validate_all_templates |
-validate_doc_templates templateID |
-validate_email_templates templateID |
-validate_note_templates templateID }
The template_tools command contains options to list, manage, and validate document, email, and note templates.
See also
• Integration Guide
Option Description
-convert_dir directory Converts all templates in the specified directory to the new format.
-convert_file filename Converts the specified template to the new format.
-import_dir objectsfile fieldsfile directory Imports context objects and form fields from the provided CSV-
formatted files into all the templates in the specified directory. This
option has the following arguments:
• objectsfile – File containing the context objects for import, in CSV
format.
• fieldsfile – File containing the fields for import, in CSV format.
• directory – Directory that contains the templates to update.
-import_files objectsfile fieldsfile outfile Imports context objects and form fields from the provided CSV-
formatted files into the specified template descriptor file (outfile).
This option has the following arguments:
• objectsfile – File containing the context objects for import, in CSV
format.
• fieldsfile – File containing the fields for import, in CSV format.
• outfile – Template descriptor file to update.
-list_all_templates Lists all templates available for validation, including document, email,
and note templates.
-list_doc_templates Lists the document templates available for validation.
-list_email_templates Lists the email templates available for validation.
-list_note_templates Lists the note templates available for validation.
-password password Password (password) to use to connect to the server. ClaimCenter
requires the password.
-server url Specifies the ClaimCenter host server URL. Include the port number and
web application name, for example:
http://serverName:8080/cc
-validate_all_doc_templates Validates all document templates.
-validate_all_email_templates Validates all email templates.
-validate_all_note_templates Validates all note templates.
-validate_all_templates Validates all document, email, and note templates.
-validate_doc_template templateID Validates the specified document template with template ID of templateID.
-validate_email_template templateID Validates the specified email template with template ID of templateID.
-validate_note_template templateID Validates the specified note template with template ID of templateID.
-working_dir directory Specifies a directory for use as the root (working directory) for relative
paths.
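For example, a command similar to the following validates all document, email, and note templates. The password and server URL are placeholder values.

template_tools -password password -server http://serverName:8080/cc -validate_all_templates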
You can also control workflows using the WorkflowAPI web service. See the Integration Guide.
Option Description
-complete workflowID Completes the running workflow with the specified ID (workflowID).
-password password Password (password) to use to connect to the server. ClaimCenter requires the password.
-resume workflowID Resumes the specified workflow (workflowID) that is in the error or suspended state.
-resume_all Resumes all workflows in the error or suspended state.
-server url Specifies the ClaimCenter host server URL. Include the port number and web application name, for
example:
http://serverName:8080/cc
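For example, a command similar to the following resumes all workflows that are in the error or suspended state. The password and server URL are placeholder values.

workflow_tools -password password -server http://serverName:8080/cc -resume_all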
The zone_import command imports data in CSV format from specified files into database staging tables for zone
data. It is only possible to import zone data for a single country at a time. The zone data files that you import must
contain zone data for a single country only. To load zone data for multiple countries, use the command multiple
times with different, country-specific zone data files each time.
Guidewire expects that you import address zone data upon first installing ClaimCenter, and then at infrequent
intervals thereafter as you receive data updates.
See also
• For a discussion of zone data, importing a zone data file, and working with custom zone data files, see “About
Importing Zone Data” on page 311.
• For more information on database staging tables, see the Integration Guide.
• For information on the web service ZoneImportAPI that also imports zone data, see the Integration Guide.
Option Description
-charset charset Character set encoding of the zone data file. The default is UTF-8.
-clearproduction Clears zone data from the production tables. Optionally, specify the -country option to clear data
for only one country.
-clearstaging Clears zone data from the staging tables. Optionally, specify the -country option to clear data for
only one country.
-country countrycode Used with -import, -clearproduction, and -clearstaging command options:
• If used with the -import option, -country specifies the country of the zone data in the import
file.
• If used with either the -clearproduction or -clearstaging options, -country specifies the
country of the zone data to clear from the tables.
-import filename Imports zone data from the specified file (filename). You must set a value for the -country
option.
If you include the optional -clearstaging option, ClaimCenter clears the data in the staging
tables for the specified country before importing the data from the import file.
-password password Password (password) to use to connect to the server. ClaimCenter requires the password.
-server url Specifies the ClaimCenter host server URL. Include the port number and web application name,
for example:
http://serverName:8080/cc
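For example, a command similar to the following clears the zone data staging tables for one country and then imports a country-specific zone data file. The file name, country code, password, and server URL are placeholder values.

zone_import -password password -server http://serverName:8080/cc -country US -clearstaging -import zonedata_US.csv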