Oracle Communications Order and Service Management
Cloud Native System Administrator's Guide
Release 7.5
F60012-01
November 2023
Oracle Communications Order and Service Management Cloud Native System Administrator's Guide,
Release 7.5
F60012-01
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience xv
Documentation Accessibility xv
Diversity and Inclusion xv
Setting Up a Caching Realm 3-5
Secure Credential Management 3-6
Using the EncryptPasswords Utility 3-6
About the EncryptPasswords Utility 3-6
Running the EncryptPasswords Utility 3-7
Removing an Encrypted Password 3-8
Using the Credential Store 3-8
About the Credential Store 3-8
How OSM Retrieves Credentials from the Credential Store 3-9
Managing Credentials in the Credential Store 3-9
Developing Cartridges to Use the Credential Store 3-9
Developing Automation Plug-ins to Use the Credential Store 3-10
Defining Data Providers in OSM Cartridges to Use the Credential Store 3-11
Using the Credential Store with Custom Data Providers 3-11
Using the Credential Store with Built-In Data Providers 3-12
Upgrading Existing Cartridge Code to Use the Credential Store 3-12
Using Built-in SOAP Adapter as a Data Provider Class 3-12
Administering Users and Workgroups 3-13
Purge Performance 6-9
Shared Pool 6-9
Development and Testing Environments 6-9
Order Purge Strategies 6-10
Partition-Based Order Purge Strategy 6-10
Partition Purge Example 6-10
Advantages and Disadvantages of Partition-Based Purge 6-13
Row-Based Order Purge Strategy 6-14
Row-Based Order Purge Example 6-15
Advantages and Disadvantages of Row-Based Order Purge 6-16
Hybrid Purge Strategy 6-17
Partitioning Realms 6-17
Partitioning Realms Configuration 6-17
Mapping Orders to Partitioning Realms 6-19
Enabling and Disabling Partitioning Realms 6-21
Renaming a Partitioning Realm 6-22
Refreshing Partitioning Realms Configuration 6-22
Adding Partitions for New Partitioning Realms 6-23
partition_auto_creation Disabled 6-23
partition_auto_creation Enabled 6-23
Purge Strategy for Partitioning Realms 6-24
Default Partitioning Realm 6-24
Non-Partitioned Schemas 6-25
Order ID Blocks 6-25
Cartridge Management Strategy 6-26
Sizing Partitions 6-26
Sizing Hash Sub-Partitions 6-27
Sizing Range Partitions for Partition-Based Order Purge 6-27
Purge Performance 6-28
Estimating Storage 6-28
"All-In" Order Volume 6-30
Partition Size Restrictions 6-31
Retention Policy 6-31
Time-to-Close Wait 6-32
Oracle RAC 6-33
Purge Frequency 6-34
Sizing Range Partitions for Row-Based Order Purge 6-36
Sizing Range Partitions for Zero Downtime 6-37
Sizing Range Partitions for Infrequent Maintenance 6-38
Online vs. Offline Maintenance 6-38
Managing Order Data 6-39
Adding Partitions (Online or Offline) 6-40
Using Row-Based Order Purge 6-41
Purging a Single Order by Order ID 6-43
Purging Orders that Satisfy Given Criteria 6-43
Scheduling Order Purge 6-44
Stopping and Resuming an Order Purge 6-44
Using Partition-Based Order Purge 6-44
Differences Between Purging and Dropping Partitions 6-44
Purging Partitions (Online or Offline) 6-45
Purging Entire Partitions That Do Not Contain Open Orders (Online or Offline) 6-46
Purging Partitions Partially (Offline Only) 6-47
Dropping Partitions (Offline Only) 6-51
Dropping Empty Partitions (Online or Offline) 6-52
Reclaiming Unused Space in Volatile Tables 6-52
Order Purge Policies 6-53
Purging Related Orders Independently 6-53
Auditing and Monitoring Order Purges 6-55
Audit Tables 6-56
Managing Exchange Tables for Partition-Based Order Purge 6-56
About OSM Purge Tables 6-56
About OSM Backup Tables 6-58
Creating Exchange Tables (Online or Offline) 6-58
Purging Exchange Tables (Online or Offline) 6-59
Dropping Exchange Tables (Online or Offline) 6-59
Estimating Partition Disk Space (Online or Offline) 6-59
Managing Cartridges 6-60
Using Fast Undeploy 6-61
Purging Metadata of Undeployed Cartridges 6-61
Configuration Parameters 6-62
range_partition_size 6-63
subpartitions_number 6-63
default_xchg_capacity 6-63
xchg_retained_orders_thres 6-64
degree_of_parallelism 6-64
degree_of_parallelism_rebuild_indexes 6-64
degree_of_parallelism_rebuild_xchg_indexes 6-64
purge_job_class 6-65
parallel_execute_chunk_size 6-65
partition_auto_creation 6-65
purge_policy_rebuild_unusable_indexes 6-66
purge_policy_purge_related_orders_independently 6-66
purge_policy_consolidate_partitions 6-66
purge_policy_time_to_close_wait 6-67
purge_audit_retention_days 6-68
deferred_segment_creation 6-68
purge_commit_count 6-69
About PL/SQL API 6-69
DBMS Output 6-69
Specifying Purge Criteria 6-69
Parallel Execution 6-71
Concurrency Restrictions 6-72
PL/SQL API Reference 6-73
Setup and Tuning Procedures 6-73
om_part_maintain.setup_xchg_tables (Online or Offline) 6-73
om_part_maintain.drop_xchg_tables (Online or Offline) 6-74
om_part_maintain.set_dop (Online or Offline) 6-74
om_part_maintain.set_dop_rebuild_indexes (Online or Offline) 6-74
om_part_maintain.set_dop_rebuild_xchg_indexes (Online or Offline) 6-74
Maintenance Procedures and Functions 6-74
om_part_maintain.add_partition (Offline Only) 6-74
om_part_maintain.add_partitions (Offline Only) 6-75
om_part_maintain.drop_partitions (Offline only) 6-75
om_part_maintain.drop_empty_partitions (Online or Offline) 6-77
om_part_maintain.purge_partitions (Online or Offline) 6-78
om_part_maintain.purge_entire_partition (Online or Offline) 6-82
om_part_maintain.estimate_ptn_purged_space (Online or Offline) 6-83
om_part_maintain.purge_xchg_bck_tables (Online or Offline) 6-84
om_part_maintain.purge_xchg_prg_tables (Online or Offline) 6-84
om_new_purge_pkg.delete_order (Online or Offline) 6-84
om_new_purge_pkg.purge_orders (Online or Offline) 6-84
om_new_purge_pkg.schedule_order_purge_job (Online or Offline) 6-86
om_new_purge_pkg.select_orders (Online or Offline) 6-87
om_new_purge_pkg.purge_selected_orders (Online or Offline) 6-88
om_new_purge_pkg.stop_purge (Online or Offline) 6-88
om_new_purge_pkg.resume_purge (Online or Offline) 6-88
Advanced Procedures 6-89
om_part_maintain.backup_selected_ords (Offline) 6-89
om_part_maintain.restore_orders (Offline) 6-90
Troubleshooting Functions 6-90
om_part_maintain.get_partitions (Online or Offline) 6-90
om_part_maintain.is_equipartitioned (Online or Offline) 6-91
Recovery Procedures 6-91
om_part_maintain.equipartition (Offline only) 6-91
om_part_maintain.purge_orphan_order_data (Online or Offline) 6-91
om_part_maintain.rebuild_unusable_indexes (Online or Offline) 6-92
om_part_maintain.rebuild_index (Online or Offline) 6-92
om_part_maintain.sys$undo_restore_table (Offline) 6-93
om_part_maintain.sys$undo_restore_orders (Offline) 6-93
Database Reference 6-93
Database Views 6-94
OM_AUDIT_PURGE_ALL 6-94
OM_AUDIT_PURGE_LATEST 6-95
Database Tables 6-96
OM_AUDIT_PURGE 6-96
OM_AUDIT_PURGE_ORDER 6-97
OM_AUDIT_PURGE_PARAM 6-98
Troubleshooting and Error Handling 6-99
Error Handling for add_partitions 6-100
Error Handling for drop_partitions 6-100
Error Handling for purge_partitions 6-101
Troubleshooting 6-101
Error Handling 6-103
Error Handling for rebuild_unusable_indexes 6-104
Error Handling for setup_xchg_tables 6-104
Performance Tuning 6-105
Tuning degree_of_parallelism 6-105
Tuning degree_of_parallelism_rebuild_indexes 6-105
Tuning degree_of_parallelism_rebuild_xchg_indexes 6-106
Tuning Parallel Job Execution 6-106
Tuning parallel_execute_chunk_size 6-106
Tuning Row-Based Purge 6-107
Database Terms 6-107
High Volatility Order Tables 7-3
Low Volatility Order Tables 7-4
Medium Volatility Order Tables 7-4
Enabling Incremental Statistics 7-5
Gathering High-Volatility-Table Statistics 7-5
Gathering Low-Volatility-Table Statistics 7-6
Preparing a New Partition 7-6
Populating New Partition Statistics 7-6
Using Statistics from Another Partition 7-6
Using Statistics from a Statistics Table 7-8
Using Statistics from Another System 7-8
Locking Partition Statistics 7-8
Configuring a Partition When It Is No Longer the Active Partition 7-9
Optimizer Statistics Error Handling Using Datapump 7-9
Optimizer Statistics Management Performance Tuning 7-9
Using Parallel Collection for Gathering Statistics 7-9
Cursor Invalidations 7-10
Optimizer Statistics Management PL/SQL API Reference 7-10
Setup and Tuning Procedures 7-10
om_db_stats_pkg.lock_volatile_order_stats 7-11
om_db_stats_pkg.unlock_volatile_order_stats 7-11
om_db_stats_pkg.set_table_prefs_incremental 7-11
om_db_stats_pkg.set_table_volatility 7-11
Maintenance Procedures 7-11
om_db_stats_pkg.gather_cartridge_stats 7-12
om_db_stats_pkg.gather_order_stats 7-12
om_db_stats_pkg.gather_volatile_order_stats 7-12
om_db_stats_pkg.copy_order_ptn_stats 7-12
om_db_stats_pkg.lock_order_ptn_stats 7-13
om_db_stats_pkg.unlock_order_ptn_stats 7-13
Advanced Procedures 7-13
om_db_stats_pkg.export_order_ptn_stats 7-13
om_db_stats_pkg.import_order_ptn_stats 7-14
om_db_stats_pkg.expdp_order_ptn_stats 7-15
om_db_stats_pkg.impdp_order_ptn_stats 7-15
Troubleshooting Procedures 7-16
om_db_stats_pkg.lstj_copy_order_ptn_stats 7-16
om_db_stats_pkg.get_order_ptn_stats 7-16
om_db_stats_pkg.list_order_ptn_stats 7-16
om_db_stats_pkg.check_order_ptn_stats 7-16
Recovery Procedures 7-17
om_db_stats_pkg.remj_copy_order_ptn_stats 7-17
Adding Partitions 10-4
Importing OSM Model Data 10-5
Exporting and Importing the OSM Model and a Single Order 10-6
Exporting OSM Order Data 10-6
Preparing to Export Order Tables for a Single Order 10-6
Exporting Order Tables That Define an Order Sequence ID 10-7
Exporting the OSM Model Data 10-8
Importing the OSM Model and Order Data 10-8
Exporting and Importing a Range of Orders and the OSM Model 10-9
Exporting the OSM Order Data Range 10-9
Preparing to Export Order Tables for a Range of Orders 10-9
Exporting Order Tables That Define an Order Sequence ID 10-11
Exporting the OSM Model Data 10-12
Importing OSM Model and Order Data 10-12
Exporting and Importing a Range of OSM Orders Only 10-13
Exporting an Additional Range of Orders 10-13
Preparing to Export Order Tables for a Range of Orders 10-13
Exporting a Range of Orders from Order Tables That Define an Order Sequence ID 10-15
Importing an Additional Range of Orders 10-16
About Order Export Queries 10-16
Changing PAR File Parameters 10-17
About Import Parameters 10-19
Troubleshooting Export/Import 10-20
12 Troubleshooting OSM
Information You Need for Troubleshooting 12-1
General Checklist for Resolving Problems 12-1
Diagnosing Some Common Problems with OSM 12-2
Cannot Log in or Access Certain Functionality 12-2
System Appears Slow 12-2
Error: "Java.lang.StackOverflowError" when Using Task Web Client 12-2
Unexpected Logout from Web Client 12-3
Error: "Login failed. Please try again." 12-3
Automation Plug-ins Are Not Getting Called 12-3
Delayed JMS Messages 12-3
Error Message For Events From a JMS Topic in a Cluster 12-3
Unexpected Values for JMS Properties 12-4
Unable to Bring Up Managed Server After Database Failure 12-4
JBoss Cache Timeouts 12-4
OSM Fails to Process Orders Because of Metadata Errors 12-4
DataDictionary Expansion Level 12-5
Quick Fix Button Active During Order Template Conflicts in Design Studio 12-5
Cannot Create New Orders on a New Cartridge Version 12-5
Error: "exact fetch returns more than requested number of rows" 12-6
Error: "unique constraint violated" 12-6
Getting Help with OSM Problems 12-6
Before You Contact Support 12-6
Reporting Problems 12-6
About Purging Orders 14-23
Purging Orders with the orderPurge.bat Script on Windows 14-23
Running Ant with the orderPurge.xml file On UNIX or Linux to Purge Orders 14-26
About Migrating Orders 14-29
Configuring and Running an Order Migration 14-30
About Validating the Metadata Model and Data 14-32
Configuring and Running an XML Document Validation 14-32
Validating an XML Document During the Import or Export Process 14-33
About Creating a Graphical Representation of the Metadata Model 14-33
Configuring and Creating a Graphical Representation of a Metadata Model 14-33
Viewing the Graphical Representation 14-34
JMS Module C-2
Queues and Topics C-2
Quotas C-3
Connection Factories C-4
Destination Key C-4
JMS Template C-4
Data Sources C-5
Users and Groups C-5
Database Configuration C-6
Preface
This document describes system administration tasks for Oracle Communications Order and
Service Management (OSM) Cloud Native.
Audience
This document is intended for system administrators, system integrators, Database
Administrators (DBA), and other individuals who are responsible for managing OSM and
ensuring that the software is operating in a secure manner. This guide assumes that users
have a working knowledge of the relevant operating system, Oracle Database, Oracle
WebLogic Server, and Java EE software.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
1
OSM System Administration Overview
This chapter provides an overview of Oracle Communications Order and Service
Management (OSM) system administration tasks.
2
Changing GUI Application Appearance and Functionality
This chapter describes how to change the appearance and functionality of Oracle
Communications Order and Service Management (OSM) GUI applications.
About Behaviors
Behaviors provide the mechanism to exercise greater control over the validation and
presentation of order data to Task web client and Order Management web client users. (In
earlier versions of OSM, this capability was called the View Framework.) Behaviors are used
mainly in the context of manual tasks that you manage with the Task web client.
There are nine behavior types that enable you to dynamically control a specific aspect of the
order data model. (You can also add new behavior types using Oracle Communications
Service Catalog and Design - Design Studio). The included behavior types are:
• Calculation: Computes the value of a field based on a formula that references order
data.
• Constraint: Specifies a condition that must be met for the data to be considered valid.
• Data Instance: Declares an instance that can be used by other behaviors.
• Event: Specifies an action to perform when the value of a field changes.
• Information: Specifies hint text, a label, or help information for a field.
• Lookup: Retrieves a set of values to display for a field.
• Read Only: Specifies whether a user can modify the value of a field.
• Relevant: Specifies whether a field is visible or hidden.
• Style: Specifies the visual appearance of a field, such as the CSS class to apply.
4. Change the value of the session-timeout parameter to your desired value (in
minutes).
5. Save and close the file.
6. Repack oms.ear. See OSM Developer's Guide for more information.
7. Add the oms.ear file to the OSM container image. See "Creating OSM Cloud
Native Images" in OSM Cloud Native Deployment Guide.
Note:
Changing the session-timeout parameter changes the automatic timeout for
both the Order Management web client and the Task web client.
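For reference, the session-timeout element in the web.xml file inside oms.ear uses the
standard Java EE form. The following is a minimal sketch; the 45-minute value is only an
example:
<session-config>
    <!-- Timeout is in minutes; 45 is an illustrative value -->
    <session-timeout>45</session-timeout>
</session-config>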
Setting Default Action on Orders and Tasks in the Task Web Client
You can specify a particular action as the default action to perform on orders and tasks
in the Worklist and Query screens for all users who have not set the default action in
the Options page of the Task web client. You can achieve this by configuring a
parameter in the oms-config.xml file. However, users can change the global default
action configured by the administrator to an action of their choice by setting the
Default Action field in the Options page. By default, the global default action parameter is
not set to any action, and the application uses the default action set by individual users.
To set a default action on orders and tasks for all users:
1. Update the oms-config parameters through the specification files. See "Working
with oms-config Parameters in OSM Cloud Native" for more information.
2. Set the DefaultUserAction parameter to the desired action.
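For reference, in the oms-config.xml form the entry would look like the following sketch;
the element layout follows the usual oms-config conventions, and Accept is an assumed
example action:
<oms-parameter>
    <oms-parameter-name>DefaultUserAction</oms-parameter-name>
    <oms-parameter-value>Accept</oms-parameter-value>
</oms-parameter>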
.customReadOnlyValueRed {
    font-size: 14pt;
    color: red;
}
2. In Design Studio, add the Style behavior for the read-only fields that you want to
customize:
For example:
• For the fields to which you want to apply blue, add customReadOnlyValueBlue in
the CSS Class Name field of the Value grouping.
• For the fields to which you want to apply red, add customReadOnlyValueRed in
the CSS Class Name field of the Value grouping.
3. Update the oms.ear file with the modified customScreen.css file.
4. Deploy the modified cartridges.
5. Add the oms.ear file to the OSM container image. See "Creating a Basic OSM
Cloud Native Instance" in OSM Cloud Native Deployment Guide for more details.
3
Setting Up OSM Security
This chapter describes how to set up security on your Oracle Communications Order and
Service Management (OSM) system.
Note:
If you use an external security implementation such as LDAP, you should also
use a caching realm to improve performance. See "Setting Up a Caching
Realm" for more information.
• Manage credentials securely using the EncryptPasswords utility or the Oracle Fusion
Middleware Credential Store Framework (CSF). See "Secure Credential Management."
• Manage workgroups. See OSM Order Management Web Client User's Guide for more
information.
For more information about WebLogic Server security realms, refer to the WebLogic Server
Console documentation.
Note:
OSM supports LDAP Version 2.
Note:
You assign individual users to roles using the Administration area of the
Order Management web client. See OSM Order Management Web Client
User's Guide for more information.
Table 3-1 describes the client functions to which you provide access.
In addition to granting web client permissions, you can also grant permissions at the order
level (by associating a role to an order type) and the task level.
See the discussion about creating new roles in Design Studio Modeling OSM Processes for
more information. After you create a role, you must assign permissions to the role entities.
See "Role Editor Role Tab" in Design Studio Modeling OSM Processes for more information
about permissions for role entities.
Note:
If multiple authentication providers are configured in WebLogic Server, all the
authentication providers (even if they are configured as optional in WebLogic
Server) should be active. If any of the authentication providers is not active,
an exception will be raised and the users will not be able to log in to OSM.
Note:
If a user is assigned (directly or indirectly) to multiple groups which have different
query tasks for the same order, it is not predictable which query task view the user
will see when querying the order.
From this window, you can change the settings for your realm.
For more information about setting up security in WebLogic Server, see Oracle Fusion
Middleware Administering Security for Oracle WebLogic Server.
Ant build files for the EncryptPasswords utility are located in the following directories:
• SDK/CartridgeManagementTool/production
• SDK/CartridgeManagementTool/development
The Ant build files have targets corresponding to each of the batch files in the SDK/
XMLImportExport directory that include the EncryptPasswords functionality.
Note:
To run the EncryptPasswords utility, you must have write access to the XML files
in which the XML Import/Export application user credentials are stored.
Note:
The configuration file must contain the <user> and <password> elements, but
the values of these elements do not matter because the script will prompt for
these values.
• UNIX or Linux:
Run the following command:
EncryptPasswords.sh XMLConfig [-dbUser] [-xmlapiUser] [-wlsUser]
For example:
EncryptPasswords.sh config/config.xml -dbUser
• Windows:
Run the following command:
EncryptPasswords XMLConfig [-dbUser] [-xmlapiUser] [-wlsUser]
For example:
EncryptPasswords config\config.xml -dbUser -xmlapiUser
where XMLConfig indicates the configuration XML file you created in the previous
step, and the following optional parameters indicate which passwords should be
encrypted:
• -dbUser: the OSM database password
• -xmlapiUser: the XML API interface password
• -wlsUser: the WebLogic domain administration server password
When you set a user's credentials, you specify only the systems that they use for
the XML Import/Export application operations they perform. For example, if the
user only imports or exports cartridges, you only need to specify the -dbUser flag.
3. When prompted by the script, enter the user names and passwords that you
selected when running the script.
When you type in passwords, nothing will be displayed on the screen.
For information about Platform Security Services and managing credentials in the credential
store, see "Oracle Fusion Middleware Securing Applications with Oracle Platform Security
Services."
When you develop OSM cartridges, Oracle recommends you use the credential store
to allow plug-in code to access credential information in a secured way. You can use
the OSM credential store APIs for code that requires credential retrieval.
OSM uses the credential store offered through WebLogic Server; however, you are not
required to use this credential store to secure credentials. You can use other methods
of securing credentials. Oracle strongly recommends you do not hard-code user
credential information in OSM code such as in plug-in script files and cartridge model
description files. Passing and storing passwords in plain text poses a security risk.
Follow proper security guidelines to develop OSM cartridges to protect data over
communication channels. Oracle recommends using SSL communication between
OSM and an external system, particularly for web services of external systems.
The following are examples of external systems used in OSM cartridges that may
require credential information:
• OSM Web Service
• Databases
• JMS queues and topics (except JMS queues deployed by the cartridge)
• Web services of any system
To develop your OSM cartridges to use the credential store, see the following:
• Use "AutomationContext" in your automation plug-in code to retrieve credentials
from the credential store. See "Developing Automation Plug-ins to Use the
Credential Store" for more information.
• Use the operation APIs in "ViewRuleContext" in XQuery scripts to access
credentials stored in the credential store.
• Use "PasswordCredStore " in your JAVA classes to retrieve user names and
passwords from the credential store.
• Use the attributes for credential store in "SoapAdapter" to retrieve credentials from
the credential store when sending a SOAP request using HTTP/HTTPS.
• Use the attributes for credential store in "ObjectelHTTPAdapter" to retrieve
credentials from the credential store when sending a request to Objectel. See
"Defining Data Providers in OSM Cartridges to Use the Credential Store" for more
information.
• See "OSM Credential Store API Command Reference" for a description of the
OSM credential store APIs.
See "Using the Credential Store" for information about the credential store.
External instance adapters and automation plug-in classes (XQuerySender and XSLTSender)
provided by Oracle to send messages and requests to external systems support the OSM
credential store APIs.
Note:
This example assumes you are using your own map. If you use the default map
(osm) and key names for the OSM application, you can use simpler code:
String password = context.getOsmCredentialPassword(username)
4
Configuring OSM with oms-config.xml
This chapter explains how to configure Oracle Communications Order and Service
Management (OSM) using the oms-config.xml file and provides a detailed reference of
available parameters.
oms-config.xml Parameters
Table 4-1 describes the parameters that can be configured in the oms-config.xml file.
5
Configuring the Task Processor
This chapter describes how to configure the task processor to improve performance and
handle rule-processing errors for Oracle Communications Order and Service Management
(OSM).
In most cases, you can use the default configuration for the task processor.
If there is a backlog of rule or delay tasks, you can increase the number of rule or delay task
processors.
OSM adjusts the number of rule and delay task processors to use no more than 10% of the
connection pool size that is configured for the WebLogic instance. The adjusted numbers are
written in the managed server's log file. If the adjusted number of rule and delay task
processors does not meet your performance requirement, increase the connection pool size
or decrease the value of the
oracle.communications.ordermanagement.RuleDelayTaskPoller.Interval parameter.
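As a sketch, an oms-config parameter entry for the poller interval might look like the
following; the value shown and its units are assumptions, so check the parameter reference
in "Configuring OSM with oms-config.xml" before changing it:
<oms-parameter>
    <oms-parameter-name>oracle.communications.ordermanagement.RuleDelayTaskPoller.Interval</oms-parameter-name>
    <oms-parameter-value>2</oms-parameter-value>
</oms-parameter>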
6
Managing the OSM Database Schema
This chapter describes how to manage an Oracle Communications Order and Service
Management (OSM) database schema.
• "Order Purge Strategies": Helps you decide on an order purge strategy. This is one
of the most important decisions that you must make, not only before going into
production but also before you start performance testing.
• "Partitioning Realms": Discusses the use of logical partitions in the OSM schema
to assist in purging or dropping orders.
• "Cartridge Management Strategy": Recommends a strategy for managing
cartridges.
• "Sizing Partitions": Discusses how to size partitions for order data. Partition sizing
depends on your order purge strategy.
• "Online vs. Offline Maintenance": Gives a brief overview of which maintenance
operations can be performed online.
Creating Tablespaces
The OSM DB Installer expects the permanent database tablespaces to be specified in the
project specification, as follows:
db:
type: "STANDARD" # Acceptable values are STANDARD and ADB
# The datasourcesPrimary section is applicable only for STANDARD DB. For ADB,
# values will be used from Autonomous Database Serverless secrets + configMap.
datasourcesPrimary:
port: 1521
# If not using RAC, provide the DB server hostname/IP address
# If using RAC, comment out "#host:"
#host: dbserver-ip
#
# If using RAC, provide list of SCAN hostname/IP addresses
# If not using RAC, comment out "#scans:"
#scans:
# - scan1-ip
# - scan2-ip
#
# If using RAC, provide either a list of VIP hostname/IP addresses
# or a list of INSTANCE_NAMES
# If not using RAC, comment these out "#vips:" and "#instances:"
#
#vips:
# - vip1-ip
# - vip2-ip
# --- OR ---
#instances:
# - instance-1
# - instance-2
# CONFIG
# FINE
# FINER
# FINEST (lowest value)
#
logLevel: "WARNING"
#
# The remaining parameters must match the values used when the PDB was
# created. Failure to match will result in dbInstaller errors
#
# The default tablespace name of OSM schema
defaultTablespace: "OSM"
# The temporary tablespace name of OSM schema
tempTablespace: "TEMP"
# The time zone offset in seconds
timezoneOffsetSeconds: "-28800"
# The model data tablespace name of OSM schema
modelDataTablespace: "OSM"
# The model index tablespace name of OSM schema
modelIndexTablespace: "OSM"
# The order data tablespace name of OSM schema
orderDataTablespace: "OSM"
# The order index tablespace name of OSM schema
orderIndexTablespace: "OSM"
You can choose different tablespaces or a single tablespace. Typically model data and
indexes are separate from order data and indexes.
If your schema is partitioned, you can also create new table partitions in different tablespaces
for increased administration and availability, for example on a rotation basis. If a tablespace is
damaged, the impact and restoration effort could be limited to one or just a few partitions.
See "Adding Partitions (Online or Offline)" for more information.
Oracle recommends the following:
• Create tablespaces dedicated to OSM, so that OSM performance and availability are not
affected by other applications, for example due to I/O contention or if a tablespace must
be taken offline. Store the datafiles of these tablespaces on different disk drives to reduce
I/O contention with other applications.
• Create locally managed tablespaces with automatic segment space management by
specifying EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT
AUTO in the CREATE TABLESPACE statement. Both options are the default for
permanent tablespaces because they enhance performance and manageability. (See
the sketch after this list.)
• Configure automatic database extent management by using the AUTOALLOCATE clause
of the CREATE TABLESPACE statement. This is the default. For production
deployments, avoid UNIFORM extent management for OSM order data and indexes
because the volume of data varies widely from table to table.
• If you use smallfile tablespaces, do not create hundreds of small datafiles. These files
need to be checkpointed, resulting in unnecessary processing. Note that Oracle
Database places a limit on the number of blocks per datafile depending on the platform.
The typical limit is 2^22 - 1 blocks, which limits the datafile size to 32 GB for 8 KB blocks.
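The following statement combines these recommendations into a minimal sketch; the
tablespace name, datafile path, and sizes are illustrative assumptions:
-- Dedicated, locally managed OSM tablespace with automatic segment space management
CREATE TABLESPACE osm
    DATAFILE '/u01/oradata/osmdb/osm01.dbf' SIZE 10G AUTOEXTEND ON NEXT 1G
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    SEGMENT SPACE MANAGEMENT AUTO;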
Additional considerations if you use bigfile tablespaces:
Using Partitioning
OSM database partitioning enhances the performance, manageability, and availability
of data in an OSM deployment.
The OSM DB Installer enables partitioning automatically. The following figure provides
details about the OSM partition tables that accumulate order-related information using
range partitioning based on OSM order ID ranges.
The OM_ORDER_HEADER table stores a synopsis for each order, such as the order
ID, priority, state, milestone timestamps, and so on. This table is range-hash
partitioned by order ID. More precisely:
• The non-inclusive upper bound of each range partition is an order ID. For example, if the
upper bound of the first partition is 1,000,001 and partitions are sized to contain
1,000,000 order IDs each, the first partition contains orders with an order ID between 1
and 1,000,000, the next partition contains orders with an order ID between 1,000,001 and
2,000,000, and so on.
• Hash sub-partitioning reduces I/O contention. In production deployments, range partitions
typically have 16, 32, or 64 sub-partitions.
• Range partition names are generated by OSM and they include the partition upper
bound. For example, the upper bound of partition P_000000000001000001 is 1,000,001.
Sub-partition names are generated by Oracle Database (for example, SYS_P2211001).
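You can inspect the generated partition names and upper bounds with a generic data
dictionary query, run as the OSM schema owner (this is standard Oracle SQL, not an OSM
API):
-- List OM_ORDER_HEADER range partitions and their non-inclusive upper bounds
SELECT partition_name, high_value, subpartition_count
FROM user_tab_partitions
WHERE table_name = 'OM_ORDER_HEADER'
ORDER BY partition_position;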
The rest of the tables that accumulate order data are equipartitioned with
OM_ORDER_HEADER. They are either range-hash partitioned or reference partitioned.
You can use the partitioning realms feature to separate orders with different operational
characteristics into different partitions. See "Partitioning Realms" for more information.
For more information about the different types of partitioning, see Oracle Database VLDB and
Partitioning Guide.
Benefits of Partitioning
Partitioning your OSM schema allows you to subdivide tables and indexes into smaller
segments. This provides the following benefits:
• Improved Manageability
• Increased Availability
• Increased Concurrency
• Support for Active-Active Oracle RAC
• Increased Query Performance (for certain queries)
Improved Manageability
Improved manageability is the result of partition independence, plus managing smaller
segments is easier, faster and less resource intensive. The benefits increase with the schema
size, because partitioning allows you to divide large tables and indexes into smaller more
manageable segments.
You can purge several weeks' worth of order data in minutes by dropping or purging a
partition, without affecting other partitions. You can set up routine purges of older partitions
containing obsolete and complete orders, while creating and bringing on-line new partitions
as existing partitions fill up.
Data Definition Language (DDL) operations on tables and indexes are less resource intensive
and less likely to fail if they are performed one partition at a time. For example, consider a
128 GB non-partitioned index and a partitioned index of the same size with 32 partitions. The
partitioned index can be rebuilt one partition at a time (32 transactions), whereas rebuilding
the non-partitioned index is a single transaction that requires 32 times more free space. If the
rebuild fails, the entire transaction is rolled back.
Increased Availability
Partitioning increases availability mainly by reducing downtime in the event of error.
For example, the time to recover from a corruption in a large table could be reduced
significantly if that table was partitioned and the corruption was isolated to a single
partition.
Increased Concurrency
Hash sub-partitioning increases concurrency by reducing I/O contention, specifically
by spreading DML statements (data modifications) over several physical sub-
partitions. Contention is usually manifested as "buffer busy" waits in Automatic
Workload Repository (AWR) reports.
Note that range partitioning does not help with contention because new orders are
created in the same range partition. The only exception is if you use active-active
Oracle RAC, in which case order creation is spread over two or more range partitions.
• All database transactions on an order that take place on a single managed server also
take place on a single database instance. This minimizes cluster waits by avoiding buffer
cache transfers among database instances.
• The workload is balanced across Oracle RAC nodes and WebLogic Managed Servers.
Figure 6-2 OSM Data Source Configuration for Active-Active Oracle RAC
The downside of creating orders concurrently in N partitions is that some order IDs are
skipped, which causes partitions to be only partially filled when they are considered
exhausted. For example, Figure 6-3 shows that MS01 and MS03 skip order IDs 2, 6, 10,
14, and so on. This is because these IDs are mapped to slots owned by MS02 and MS04.
However, MS02 and MS04 do not generate those order IDs because they generate
order IDs from a different range. As a result, each partition is only half-full when it is
exhausted.
The overall size of order ID gaps depends on the number of Oracle RAC nodes,
regardless of how the load is balanced across those nodes. For example, when you
remove managed server MS04 from the cluster of the previous example, so that each
managed server processes 1/3 of the load, the managed servers are still divided into
two groups. This means that partition P_00000000000100001 contains 2/3 of the
order IDs and P_00000000000200001 contains the remaining 1/3. Thus, when
P_00000000000100001 is exhausted, it will be 1/3 empty. Because MS02 skips slots
assigned to MS01 and MS03, its partition will be exhausted at about the same time and it
will be 2/3 empty. Although the two Oracle RAC nodes are not balanced (they process
2/3 and 1/3 of the load each), on average both partitions are half empty.
In summary, if you switch from a single database to N-node active-active Oracle RAC,
the number of partitions increases N-fold, whereas the actual number of order IDs
stored in a partition decreases N-fold. Storage consumption is about the same.
For more information, refer to the OSM High-Availability Guidelines and Best Practices
in the OSM Installation Guide.
Pitfalls of Partitioning
Tables that store order data are equipartitioned with OM_ORDER_HEADER. The rest
of the tables are not partitioned. Therefore, the number of physical partitions in an
OSM schema is at least T x R x H, where T is the number of partitioned tables, R is
the number of OM_ORDER_HEADER range partitions, and H is the number of hash
sub-partitions per range partition (excluding LOB index partitions). For example, an
OSM schema with 48 order tables, 10 OM_ORDER_HEADER range partitions, and 32
hash sub-partitions has at least 15360 physical partitions. If you let the number of
physical partitions increase unchecked, you are likely to run into the performance
problems discussed below, even if the space used by most partitions is very small. It is
recommended that you review the "Sizing Partitions" section for sizing guidelines.
Purge Performance
A very large number of partitions could significantly slow down partition purge. Experience
shows that the tipping point is around 300,000 physical partitions, although this varies
depending on the specific OSM installation environment.
The time to purge a partition using EXCHANGE PARTITION operations depends on the
number of hash sub-partitions. For example, if you decrease the number of sub-partitions
from 64 to 32, the duration of the EXCHANGE PARTITION stage of the purge decreases to
nearly half.
A partitioned table is represented in the library cache as a single object, regardless of the
number of partitions. Partition purge operations use DDL statements, which invalidate the
cursors associated with the underlying partitioned tables. When a cursor is re-parsed, all the
partition metadata for the underlying tables must be reloaded, and the amount of time
increases with the number of partitions. This is less of an issue when you drop a partition,
because the DROP PARTITION statement is cascaded. However, partition purge also uses
EXCHANGE PARTITION, which is not cascaded in 11g. A partition purge runs several
exchange operations per reference-partitioned table, causing repeated metadata reloads
that could significantly slow down the purge (for example, from minutes to hours).
Shared Pool
Oracle Database stores partitioning metadata in the data dictionary, which is loaded in the
row cache of the shared pool. A very large number of partitions could stress the shared pool.
If the shared pool is undersized, you could run into ORA-4031 errors (unable to allocate
shared memory), especially while purging partitions.
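A quick, generic way to check shared pool headroom (standard Oracle dynamic performance
views, not OSM-specific):
-- Free memory currently available in the shared pool, in MB
SELECT pool, name, ROUND(bytes/1024/1024) AS mb
FROM v$sgastat
WHERE pool = 'shared pool' AND name = 'free memory';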
Figure 6-5 shows the second maintenance window (after 2 weeks). Because a
partition contains 2 weeks' worth of orders, P5 is exhausted and P6 is half full. As in
the previous maintenance, it is cost-effective to purge only one partition that has not
yet been purged, which is P2 in this example. You also add at least one partition (not
shown). Notice that the number of partitions that are not purged is 3.5 again.
Figure 6-6 shows the third maintenance window. As in the previous maintenance, it is
cost-effective to purge only one partition (P3) that has not yet been purged, and 3.5
partitions are not purged. This time, however, the previously purged partitions P1 and
P2 are purged again and consolidated with P3 (3-to-1 consolidation). The proportion of
orders retained with this consolidation is less than 3.2%, which is below the 5%
tolerance limit. Periodic consolidation is recommended to maintain a low number of
partitions and maximize reclaimed space.
• If you have very high volumes of orders and you cannot afford frequent downtime,
the large storage size could become hard to manage.
• This strategy does not work well if you have a mix of long-lived orders and a
relatively high volume of short-lived orders. Because the high-volume orders
reside together with long-lived orders, the latter dictate the purge strategy. Unless
long-lived orders are a small fraction of the total volume, it might not be cost-effective to
purge a partition soon after all short-lived orders are closed. (This is because
retaining a large number of long-lived orders would considerably increase the
purge time, and therefore the downtime.) Also, as explained in "Pitfalls of
Partitioning," if you let the number of partitions increase significantly, the performance
of partition purge operations suffers. In this case, consider a hybrid purge strategy
or row-based order purge.
To mitigate the disadvantages of this strategy, choose the partition size carefully and
adjust it as conditions change. As a rule of thumb, size your partitions to contain as
many orders as will be purged in one purge maintenance window. For sizing
guidelines, refer to the "Sizing Range Partitions for Partition-Based Order Purge"
section.
Partitions should be sized with a wide range of order IDs to store several months' worth of
orders. Partition sizing depends on the retention policy, the order lifetime, and how you
reclaim the space occupied by old partitions. This leads to the following two variations of this
strategy:
• Zero downtime: If you have a strict 24x7 requirement, you could delete orders from old
partitions until they are empty, which enables you to drop them online. The downside is
that it might be a long wait before you can fully reclaim that space by dropping the empty
partitions, especially if you have orders that remain open for several months or even
years (as mentioned earlier, deletes do not lower the segment high water mark and the
space freed by deletes cannot be used for new orders because new orders are created
on the latest partition).
• Infrequent maintenance (for example, once a year): If you have a near 24x7
requirement with occasional maintenance windows, you could use those windows to drop
or purge old partitions offline.
The following sections provide additional information about this strategy:
• Row-Based Order Purge Example
• Advantages and Disadvantages of Row-Based Order Purge
• Using Row-Based Order Purge
Partitioning Realms
You can use the partitioning realms feature to separate orders into different partitions when
those orders have different operational characteristics, for example, short-lived orders versus
long-lived orders. This enables you to group orders into partitions that can be purged or
dropped more effectively.
The partitioning realms feature is used for logical partitioning, which is supported even for
non-partitioned schemas.
A new order that is received by OSM requires an order_seq_id before it is persisted in the
database. Each partitioning realm reserves ranges of order_seq_id values to be used by
orders that are assigned to that realm. Mapping rules within each partitioning realm
determine which orders are assigned to the partitioning realm. If an order does not map to
any partitioning realm it is assigned to a pre-defined default realm. The default realm is called
default_order.
Orders automatically map to default realms that are pre-defined in the database. You do not
have to configure partitioning realms unless you have a specific requirement to do so. For
more information, see "Default Partitioning Realm."
Note:
The partitioning realms files must be located in the same file system as the oms-
config.xml file.
Sample order partitioning realms configuration files are provided in the OSM SDK in
the SDK/Samples/PartitioningRealms directory. An XML Schema Definition (XSD)
for partitioning realms is also located in this directory.
Example 6-1 is a sample partitioning realm file. This sample file contains only one
order partitioning realm, but files can contain configuration for multiple realms, if
required.
Example 6-1 Sample Order Partitioning Realms File
<partitioningRealmModel xmlns="http://xmlns.oracle.com/communications/
ordermanagement/partitioningRealms" realmType="ORDER">
<partitioningRealm name="sample_order_realm" enabled="true">
<description>Sample for demonstrating realm configuration</description>
<purgeStrategy>ANY</purgeStrategy>
<parameters>
<rangePartitionSize>500000</rangePartitionSize>
</parameters>
<mappings>
<includes>
<cartridgeNamespace>MyTestCartridge</cartridgeNamespace>
<orderName>MyOrder</orderName>
</includes>
</mappings>
</partitioningRealm>
</partitioningRealmModel>
Table 6-1 lists and describes the elements and attributes that can be included in the
partitioning realms file.
<includes>
<cartridgeNamespace>Mobile.*</cartridgeNamespace>
</includes>
</mappings>
</partitioningRealm>
</partitioningRealmModel>
The following fields can be used as match criteria in the partitioning realms mapping
file:
• Cartridge Namespace
• Cartridge Version
• Order Name
Note:
The cartridge namespace and version are the deployed cartridge namespace
and version. For standalone cartridges, use the standalone cartridge
namespace and version. For composite cartridges, use the composite
cartridge namespace and version.
Using the cartridge version in mappings is not recommended because
mappings can be broken when cartridges are upgraded and will then require
more frequent maintenance.
If there are multiple includes elements, only one of the includes criteria needs to match
for the realm to map. In other words, multiple includes elements form an OR condition.
For example, with the following configuration the short_lived_orders partitioning
realm is mapped to orders from both Mobile and Broadband cartridges.
<partitioningRealmModel realmType="ORDER">
<partitioningRealm name="short_lived_orders">
<mappings>
<includes>
<cartridgeNamespace>Mobile</cartridgeNamespace>
</includes>
<includes>
<cartridgeNamespace>Broadband</cartridgeNamespace>
</includes>
</mappings>
</partitioningRealm>
</partitioningRealmModel>
When there are multiple partitioning realms defined in a configuration file, they are processed
sequentially. The first partitioning realm to match the order data is used. In the following
example, the long_lived_orders realm would never be mapped, because the
short_lived_orders realm maps to all cartridges starting with "Mobile".
<partitioningRealmModel realmType="ORDER">
<partitioningRealm name="short_lived_orders">
<mappings>
<includes>
<cartridgeNamespace>Mobile.*</cartridgeNamespace>
</includes>
</mappings>
</partitioningRealm>
<partitioningRealm name="long_lived_orders">
<mappings>
<includes>
<cartridgeNamespace>MobileCartridge</cartridgeNamespace>
</includes>
</mappings>
</partitioningRealm>
</partitioningRealmModel>
Enabling and Disabling Partitioning Realms
You can enable and disable realms by changing the enabled attribute in the
partitioning realm configuration to true (enabled) or false (disabled). If a partitioning
realm is disabled, it is no longer used for mapping incoming orders, therefore any
partition or order ID block that is assigned to that partitioning realm is no longer used.
Enabled partitioning realms cannot be removed from the partitioning realm
configuration file. If an enabled partitioning realm is removed from the configuration
file, a validation error occurs the next time you restart the system or refresh OSM
metadata.
You cannot disable the default partitioning realm: default_order.
To remove a partitioning realm from the configuration file:
1. In the configuration file, set the enabled attribute to false to disable the
partitioning realm.
2. Do one of the following:
• Restart OSM.
• Refresh the OSM metadata.
The partitioning realm is disabled in the database.
3. Remove or comment out the realm XML configuration that you want to remove.
Note:
This removes the XML configuration for the realm, however the realm
still exists in the database. You cannot remove partitioning realms from
the database.
Renaming a Partitioning Realm
To rename a partitioning realm, change its name attribute in the XML configuration file.
After you change the name in the XML configuration file, refresh the OSM metadata.
You cannot rename the default partitioning realm: default_order.
Refreshing Partitioning Realms Configuration
• If the server is starting, an invalid partitioning realms configuration causes a validation
error and the server fails to start. You can find the cause of the validation error in the log
files. The problem in the partitioning realms configuration must be fixed before you
attempt to restart OSM.
• If the server is already running, an invalid partitioning realms configuration causes errors
in the logs and the loading of the partitioning realms configuration is abandoned. The
OSM server continues to run using the last known good realm configuration. In other
words, the partitioning realm configuration changes are ignored. You can find the cause
of the validation error in the log files. The problem in the partitioning realm configuration
must be fixed before you attempt to refresh the OSM metadata.
partition_auto_creation Disabled
If partition_auto_creation is disabled, partitioning realms must be created in a disabled
state before adding partitions.
To add partitions with the partition_auto_creation attribute disabled:
1. Create the new partitioning realm by setting the enabled attribute to false in the
partitioning realm XML configuration file.
2. Refresh the OSM metadata or restart OSM.
This creates the partitioning realm in the system in a disabled state.
3. Add one or more partitions for the new (disabled) realm using the
om_part_maintain.add_partitions procedure. In addition to the name of the new
partitioning realm, you must set the a_force argument to true to create a partition for a
disabled partitioning realm. If you do not enter true for the a_force argument, an
exception is raised because, by default, partitions cannot be added to disabled realms.
exec om_part_maintain.add_partitions(a_count, a_tablespace, a_realm_mnemonic, true)
4. When you are ready to use the new partitioning realm, set the enabled attribute to true in
the partitioning realm XML configuration.
5. Refresh the OSM metadata or restart OSM.
This enables the partitioning realm in the system.
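For example, the add_partitions call from step 3 might look like the following in practice;
the realm and tablespace names are illustrative assumptions:
exec om_part_maintain.add_partitions(2, 'OSM', 'short_lived_orders', true)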
partition_auto_creation Enabled
If partition_auto_creation is enabled, you can create partitioning realms in an enabled state.
Partitions are created automatically for the partitioning realm.
Non-Partitioned Schemas
You can use partitioning realms in non-partitioned environments to improve the performance
of row-based purging. Grouping orders based on how long they take to close allows orders to
be purged together. Purging sequential orders reduces database IO during the purge.
Order ID Blocks
Use the OM_ORDER_ID_BLOCK table to determine which orders belong to which realms.
When new partitions are added, a new row is inserted into this table to track the order ID
range and associated partitioning realm.
The following example query lists all the block ranges and their associated partitioning realm:
select b.first_order_id,
b.last_order_id,
r.mnemonic,
b.status,
r.realm_type,
b.dbinstance
from om_order_id_block b,
om_partitioning_realm r
where b.realm_id = r.realm_id
order by b.last_order_id;
Where:
• first_order_id: The first order ID in the block.
• last_order_id: The last order ID in the block.
• mnemonic: The name of the partitioning realm associated with the block.
• status: One of USED, ACTIVE, or AVAILABLE.
– ACTIVE: A block of IDs that is actively being used for new orders.
– USED: A block of IDs that is no longer being used to generate new IDs.
– AVAILABLE: A new, empty block that is available for use.
• realm_type: Must be ORDER.
• dbinstance: The number of the Oracle RAC instance. A value of -1 means the block is
unassigned to an Oracle RAC node.
To determine which realm a specific order is associated with, run the following query
(replacing order_seq_id with the ID of the order that you want to query):
select r.mnemonic,
r.realm_type
from om_order_id_block b,
om_partitioning_realm r
where order_seq_id between b.first_order_id and b.last_order_id
and b.realm_id = r.realm_id;
Sizing Partitions
The following values, which you enter on the Database Schema Partition
Information installer screen, specify the size and number of partitions created during
and after an OSM installation.
• Orders per Partition: Specifies the number of orders that the Oracle Database
allows in a range partition. This is also referred to as the range partition size.
• Number of Sub-partitions: Specifies the number of hash sub-partitions in a range
partition.
You can change the values that you selected during the installation process by
updating the range_partition_size and subpartitions_number OSM database
parameters.
You can configure different range_partition_size parameters for each partitioning
realm. This allows you to tailor the partition size for the partitioning realm purge
strategy. For example, you could use row-based purge with oversized partitions for a
realm defined for high-volume, short-lived orders, and partition-based purge with
smaller partitions for low-volume, long-lived orders. You configure the realm-specific
range_partition_size using partitioning realm configuration files. For more
information, refer to the configuration section of "Partitioning Realms."
Updates to these parameters do not affect existing partitions. For more information about
these parameters, see "Configuration Parameters."
Sizing of partitions depends on the purge strategy and several other factors, as discussed in
the following sections:
• Sizing Hash Sub-Partitions
• Sizing Range Partitions for Partition-Based Order Purge
• Sizing Range Partitions for Row-Based Order Purge
Purge Performance
The main factors that affect partition-based purge downtime are purge frequency and
purge performance.
There are a number of ways to improve purge performance:
• If range partitions are undersized, according to the guideline that each range
partition should ideally contain as many orders as will be purged in one purge
maintenance window, consider increasing the partition size. For example, the time
to purge a 200 GB partition is only slightly more than the time to purge a 100 GB
partition. This guideline also helps minimize partition consolidations.
• Decrease the number of hash sub-partitions. The time to purge a 200 GB partition
with 64 hash sub-partitions is nearly double the time to purge a 200 GB partition
with 32 sub-partitions. For more information, refer to "Pitfalls of Partitioning" and
"Sizing Hash Sub-Partitions."
• Decrease the overall number of physical partitions. For more information, refer to
"Pitfalls of Partitioning."
• Increase the time-to-close wait to reduce the number of retained orders.
• Tune purge operations, for example increase the degree of parallelism.
• Tune the database and operating system.
• Tune storage. For example, consider enabling more storage ports or converting
disks from RAID-5 to RAID-10, which has better write performance.
• If, after exhausting all of the above options, performance is still inadequate,
consider hardware upgrades depending on the nature of the bottleneck (for
example, CPU, RAM, I/O bandwidth).
Estimating Storage
To determine the size of partitions, you also need to consider the amount of storage
that is allocated to OSM. This is necessary to provision sufficient storage capacity,
ensure that you are comfortable managing it, and validate the trade-off between
storage and the frequency and duration of maintenance windows (outages).
It is recommended that you conservatively estimate the amount of required storage. Consider
possible changes in sizing criteria and purge frequency, such as a volume or order size
increase due to a rollout of new services, orders requiring more space due to additional
functional requirements introduced during a solution or product upgrade, or a purge embargo
during holidays. Add contingency for unforeseen events that might delay purging or lead to
increased space consumption.
For the purpose of estimating minimum storage requirements, consider the following partition
breakdown:
• The oldest partitions that have been purged at least once.
• Partitions that have never been purged, including exhausted partitions and the latest
partition(s) where new orders are created. (If you use Oracle RAC with N nodes in active-
active mode, orders are created concurrently on N partitions as explained in the Oracle
RAC section.)
The oldest partitions that have been purged at least once normally contain a small number of
orders. It is recommended that you consolidate these partitions regularly (every few purges).
If you do, the total space consumed by those partitions should be a fraction of a single
partition.
Partitions that have never been purged consume the bulk of your storage. The number of
these partitions depends on the partition size, the order retention period, the time-to-close
wait, the purge frequency and whether you use active-active Oracle RAC. At the time of
purge, these partitions can be further distinguished as eligible and ineligible for purge. If you
follow a regular schedule, you can estimate the space consumed by these partitions as
follows:
• Where P is the partition size (for example, 4 week's worth of data), R the retention period,
T the time-to-close wait, and F the purge frequency (all using the same units, such as
days or weeks).
• Where N is the number of active-active Oracle RAC nodes. If you use a single instance
database, N=1.
• Where S is the space consumed by a single partition. Refer to the ""All-In" Order Volume"
section for estimating S.
• To estimate the number of partitions that are eligible for purge: F / P x N
• To estimate the number of partitions that are ineligible for purge: (T + R) / P x N
• To estimate the total number of partitions that have never been purged: (F + T + R) / P x
N
• To estimate the total space consumed by these partitions: (F + T + R) / P x N x S
• If you use a single instance database and the partition size is the same as the purge
frequency, the above formula can be simplified: (P + T + R) / P x S
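As an illustration, consider the hypothetical figures P = 2 weeks, T = 2 weeks, R = 4 weeks, F = 2 weeks, N = 1 and S = 100 GB:
• Partitions eligible for purge: F / P x N = 2 / 2 x 1 = 1
• Partitions ineligible for purge: (T + R) / P x N = (2 + 4) / 2 x 1 = 3
• Partitions that have never been purged: (2 + 2 + 4) / 2 x 1 = 4, consuming about 4 x 100 GB = 400 GB
With a 25% contingency, you would provision about 500 GB for these partitions.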
Oracle strongly recommends that you increase your estimate by some contingency based on
your ability to quickly add storage if necessary, reschedule a failed purge, and other risks (for
example, by 25% or the amount of space reclaimed by each purge).
Example:
Figure 6-8 is an example of estimating minimum storage requirements.
• Partition size (P): 2 weeks' worth of orders
• Purge frequency (F): Biweekly
• Populate a partition with a representative mix of orders and states for the same period. If
that period is too long due to storage or time constraints, you may use a shorter period.
However, it is important that you use a substantial data set to improve the accuracy of
estimates - typically at least one day's worth of orders.
• Use the om_part_maintain.estimate_ptn_purged_space procedure to estimate the space
that would be reclaimed if you purged the entire partition, and extrapolate for various
partition sizes. For more information, see "Estimating Partition Disk Space (Online or
Offline)."
Retention Policy
The retention policy is one of the most important sizing factors, yet you have the least control
over it because normally it is determined by the business. The retention period starts
counting after an order is closed. Therefore, in order to determine when an exhausted
partition will be both eligible and cost-effective to purge, add the retention period to the "time-
to-close" wait period.
Example:
Figure 6-9 shows the impact of the retention period. Decreasing the retention period by 2
weeks requires less storage, equal to the space consumed by a single partition. This is
because each partition is sized to contain 2 weeks' worth of orders. Similarly, if you increased
the retention period to 6 weeks, you would consume additional space for 2 weeks' worth of
orders and you would have to maintain an extra partition.
Time-to-Close Wait
Time-to-close wait is the period until "most" orders in the partition are closed. The
objective is to wait until a partition purge is cost-effective. As a starting point, you
should wait until at least 98% of the orders are closed. Your final decision is a
trade-off between storage and purge performance (duration of outage), as discussed
below.
The first concern is the impact to purge performance of the time-to-close wait. When
you purge a partition, retained orders are temporarily copied into so-called backup
tables, and they are later restored (copied back) into the partition. These copy
operations could add significant downtime to the maintenance window depending on
the volume of retained data, your hardware, and the degree of parallelism. You can
decrease the operation time by increasing parallelism. In general, you should aim to
maximize resource utilization in order to improve purge performance. However,
increased parallelism comes with more overhead. For example, you might find out that
if you double the parallelism, the operation time is reduced by only one third. And there
is a tipping point where parallelism overhead outweighs gains. Therefore it is
recommended that you tune the degree of parallelism and evaluate the performance of
purge operations on production quality hardware - ideally of the same caliber as your
production hardware. For additional information about tuning, see the "Performance
Tuning" section.
It is easier to use percentages in your initial time-to-close calculations (for example,
the time to close 98% of orders). Performance tests help you translate these percentages into absolute
numbers. For example, suppose your acceptable range for copying retained orders
(backup and restore) is 15-30 minutes, and that according to performance tests this is
enough to retain 10000-20000 orders. In order to allow for partition consolidations, you
could use 10000 in your calculations, which also provides a safety buffer. For example,
if the partition size in one of your scenarios is one million orders, 10000 orders is 1%.
In this case, time-to-close is the time it takes to close 99% of the orders.
With regard to storage, a shorter time-to-close wait is better. Decreasing the time-to-close
wait alone by X days is the same as decreasing the retention period alone by X days or
decreasing both by X days in total.
Example:
Figure 6-10 shows the impact of the time-to-close wait period. Each partition is sized to
contain 2 weeks' worth of orders. All things being equal, decreasing this wait by 2 weeks
requires less storage, equal to the space consumed by a single partition. In exchange, the number of retained orders increases about fivefold, to roughly 10%, which might add several minutes to the duration of each maintenance window. You must decide whether these storage savings justify a perpetually longer outage.
Oracle RAC
As explained in the section "Support for Active-Active Oracle RAC," if you switch from a
single database to Oracle RAC with N active-active nodes, the number of partitions increases
N-fold whereas the actual number of order Ids stored in a partition decreases N-fold. This
means that:
• The space consumed by N partitions is about the same as that consumed previously by a
single partition.
• You do not necessarily need to change the partition size, storage capacity, the purge
frequency, or any other purge-related policies.
• During a purge window, you must purge N partitions instead of one and consolidate them
N-to-1.
Consolidating partitions might sound contrary to the way OSM is designed to use partitions
on active-active Oracle RAC. However, it is unlikely that order processing on a consolidated
partition will experience cluster waits. The number of retained orders is normally small,
the consolidated order Ids are far apart, and there is typically little activity on those
orders. If a significant increase in cluster waits proves to be the result of partition
consolidation, consider avoiding consolidation when a partition is purged for the first
time.
Another concern is that a large number of physical partitions could potentially cause
performance issues, as discussed in the "Pitfalls of Partitioning" section.
Using Oracle RAC in active-passive mode is similar to using a single instance
database. The only difference is that order creation might be switched to another
partition and then back to the original in the event of failover and failback, although a switch might not occur right away, or at all. This means that you may end up
with a sparsely populated partition, which at some point could be consolidated with
another partition.
Example:
Figure 6-11 compares a single instance database to active-active Oracle RAC.
Specifically, OSM is configured to use two nodes in active-active mode. The Oracle
RAC database may have additional nodes that are either not used by OSM or they are
used in passive mode. The partition size, time-to-close wait, retention period and
purge frequency are the same. However, OSM uses twice as many partitions on
Oracle RAC, which are half-full when they are exhausted (half of the order Ids are
skipped). This means that you must purge and consolidate two partitions instead of
one to reclaim the same amount of space.
Purge Frequency
As explained in "Estimating Storage," if you follow a regular purge schedule, the
number of partitions purged during each maintenance window is F/P for a single
instance database, where F is the purge frequency (for example, 30 days) and P is the
partition size (for example, 30 days' worth of orders). As a starting point, it is recommended
that you size each range partition to contain as many orders as will be purged in one purge
maintenance window, that is, F=P. As you evaluate scenarios for different purge frequencies,
adjust the partition size accordingly so that F=P. If the partition size is less than the purge
frequency, you will have to consolidate partitions N-to-1, where N= F/P. This will add some
extra time to purge maintenance (normally measured in minutes). You might do this if you are uncomfortable using large partitions. In this case, if you want a constant (predictable) consolidation ratio, choose the partition size so that N = F/P is an integer.
A desire to purge as infrequently as possible is likely limited by the storage capacity and/or
the administrative burden of managing a very large schema (whatever your criteria may be
for "large"). Fortunately, you can often decrease the purge frequency N-fold with a relatively
small increase in storage capacity. For simplicity, consider a single instance database and
assume that the purge frequency is the same as the partition size. As explained in Estimating
Storage, in this case you can use the following formula to estimate the storage consumed by
partitions that have never been purged, where P is the partition size (for example, in days), T
is the time-to-close wait, R is the retention period, and S is the space consumed by a single
partition:
(P + T + R) / P x S = (1 + (T + R) / P) x S
Based on this formula, if the period T + R is large compared to P, you could double or triple
the partition size and the purge frequency with a relatively small increase in storage. This is
demonstrated with the following example.
Figure 6-12 Impact of Doubling Partition Size and Reducing Purge Frequency
to Half
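The arithmetic behind this comparison can be sketched with hypothetical figures. If s is the space consumed by one week's worth of orders, then S = P x s and the formula (1 + (T + R) / P) x S simplifies to (P + T + R) x s. With T + R = 6 weeks, a partition size of P = 2 weeks consumes (2 + 6) x s = 8 x s, whereas doubling to P = 4 weeks consumes (4 + 6) x s = 10 x s. In other words, you purge half as often for only a 25% increase in storage.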
• If you are uncomfortable with oversized partitions, and you can afford some occasional downtime
(for example, every 6 months or once a year), you can size partitions for infrequent
maintenance as discussed in "Sizing Range Partitions for Infrequent Maintenance."
• The partition sizing for a single instance database and active-active Oracle RAC is the
same. As explained in "Support for Active-Active Oracle RAC," if you switch from a single
database to Oracle RAC with N active-active nodes, the number of partitions will increase
N-fold, whereas the actual number of order Ids stored in a partition will decrease N-fold.
If both the retention period and the maximum order lifetime are relatively short, there is more
flexibility in sizing partitions. You do not necessarily need oversized partitions because you
can drop them online in a relatively short period after they are exhausted.
If the current partition is exhausted and the previous partition still has long-lived open orders that are expected to remain open for much longer, you might have to schedule
a maintenance window to purge that partition using partition-based purge.
Example:
Suppose 98% of the orders close within 1 week, the retention period is 4 weeks, and
you have a maintenance window every 24 weeks. You want the first partition to be
exhausted after 19 weeks or less (24 - 1 - 4). Using 1 week as contingency, 18 weeks
is a good size. After that, the partition size is 24 weeks (the same as the purge
frequency), everything else being the same.
Table 6-2 summarizes which operations can be performed online. Performing operations offline is always faster than performing them online. Even if a procedure supports online operation, running it online is recommended only under low order volume. In particular, performing partition management operations online causes lock contention and waits in the database, such as cursor: pin S wait on X. Under high volume, such contention could result in severe performance degradation, transaction timeouts, and even order failures.
• "Estimating Partition Disk Space (Online or Offline)": Explains how to estimate the
amount of space that could be reclaimed during a maintenance window, the
amount of space consumed by a partition, or the average order size in a partition.
Example (Add 2nd partition): Consider a new installation with range_partition_size equal
to 100,000. The upper bound of the partition created by the installer is 100001 (the upper
bound of a partition is non-inclusive). The following statement adds a second partition with
upper bound 200001 on tablespace OSMTS.
execute om_part_maintain.add_partition('OSMTS');
Example (Add Nth partition, N > 2): The following statement adds three more partitions on
the same tablespace as the most recently added partition with upper bounds 300001, 400001
and 500001.
execute om_part_maintain.add_partitions(3);
In the first stage, order purge scans OM_ORDER_HEADER and inserts the order Ids
of all orders that satisfy the given purge criteria into the
OM_ORDER_ID_FOR_PURGE staging table. You can restrict the scope of the search
to an order ID range, for example to the latest partition(s) where new orders are
created.
In the second stage, the selected orders are purged in parallel using the
dbms_parallel_execute package. More precisely:
• Order purge splits the work into smaller pieces by dividing the data blocks of
OM_ORDER_ID_FOR_PURGE into chunks. Then it spawns N database jobs to
purge the chunks in parallel, where N is the degree of parallelism (possibly 1).
Each job processes one chunk at a time by deleting one order at a time, automatically committing every few deletes. In the event of an error (for example, a deadlock), the job continues with the next chunk.
• After finishing the processing of all chunks, order purge retries processing of any
failed chunks until either all chunks are processed successfully (all orders in the
chunk are purged) or a pre-defined retry threshold is reached.
• At the end of a successful purge, order purge clears
OM_ORDER_ID_FOR_PURGE.
This approach ensures that (a) an order is either purged entirely or not at all, (b) a purge may succeed partially even in the event of errors, and (c) the purge handles recoverable errors, such as deadlocks.
For performance reasons, the staging table is hash partitioned. To minimize contention
among purge jobs, OM_ORDER_ID_FOR_PURGE must have the same number of
partitions as the number of hash sub-partitions in an OM_ORDER_HEADER range
partition. Otherwise, order Ids in different OM_ORDER_ID_FOR_PURGE blocks that
are processed by different purge jobs might be stored in the same
OM_ORDER_HEADER block.
• You cannot drop a partition if it has open orders. However, you can purge a partition if the
number of retained orders does not exceed a configurable threshold. In this case, OSM
copies retained orders to the so-called backup tables prior to the partition exchange, and
restores them afterwards.
• To purge partitions you must create exchange tables first. This must be done once after a
new installation and subsequently after schema changes. Exchange tables are not used
when dropping partitions or when using row-based order purge.
• If you have only one partition, you cannot drop it unless you add another partition first.
This restriction does not apply to purging.
• You can only drop partitions when OSM is offline, so you must plan for downtime. If a
partition does not have any open orders and all orders satisfy the purge criteria, you
might be able to purge the partition online. However, note that purging online is slower
than offline, and results in increased contention and lower throughput because DDL
operations lock the entire table in exclusive mode. Therefore, you should only purge
online during the lowest volume hours and only after you have tested it successfully in
your environment.
locks acquired on the exchange tables do not block other processing). For
more information see "Managing Exchange Tables for Partition-Based Order
Purge."
Purging Entire Partitions That Do Not Contain Open Orders (Online or Offline)
You can entirely purge partitions that do not contain any open orders with the
om_part_maintain.purge_partitions procedure. If purging online, this procedure
exchanges each partition you want to purge with empty purge table(s), effectively
swapping the data of that partition out of the table. If purging offline, partitions that can
be entirely purged are dropped, unless you disallow it. However, the partition where
retained orders are consolidated is always exchanged, even if that partition has no
retained orders itself.
• Purging partitions online causes lock contention in the database and waits, such
as cursor: pin S wait on X. Under high volume, such contention could result in
severe performance degradation, transaction timeouts, and even order failures.
• You can disallow dropping partitions by passing a_drop_empty_ptns=false.
However, this prevents partitions from being consolidated and affects purge
performance.
• The name and upper bound of an exchanged partition do not change.
• If the exchanged partition and the purge table are on different tablespaces then
after the exchange the two tablespaces are swapped (there is no movement of
data).
• If the parameter purge_policy_purge_related_orders_independently is set to
'N' and the partition contains orders that are associated directly or indirectly with
orders that do not satisfy the purge criteria (for example, open follow-on orders in
a different partition), the partition cannot be purged entirely. For more information, see the
purge policy section in "Purging Related Orders Independently."
For more information, see om_part_maintain.purge_partitions.
Figure 6-16 Purging Partitions That Contain Orders That Must be Excluded
from Purging
Figure 6-17 to Figure 6-21 show how purge_partitions purges these partitions step
by step:
1. It copies the orders that do not satisfy the purge criteria from
P_000000000000700001 and P_000000000000800001 into the backup tables.
2. It purges partitions P_000000000000600001, P_000000000000700001 and
P_000000000000800001 by exchanging them with the purge tables.
3. It drops partitions P_000000000000600001 and P_000000000000700001, which
are now empty.
4. It restores the retained orders from the backup tables into partition
P_000000000000800001.
5. (Optional) It purges the purge tables and continues the same process for any
remaining partitions. This is possible only if you allowed it to purge the purge
tables. Otherwise, it cannot proceed because the purge capacity is exhausted.
Figure 6-20 Step 4: Restore the Retained Orders from the Backup Tables
Figure 6-21 Step 5: (Optional) Purge the Purge Tables to Reclaim Space
When you drop a partition, all its data is deleted and storage is immediately reclaimed. You
can use the stored procedure om_part_maintain.drop_partitions to drop partitions that contain
orders with order IDs within a specified range if you no longer require the order data they
contain and they have no open orders. If the schema contains only a single partition
then Oracle Database does not allow you to drop it.
If the parameter purge_policy_purge_related_orders_independently is set to 'N'
and the partition contains orders that are associated directly or indirectly with orders
that do not satisfy the purge criteria (for example, open follow-on orders in a different
partition), the partition cannot be dropped. For more information, see the purge policy
section in "Purging Related Orders Independently."
Because global indexes become unusable when partitions are dropped, this procedure
also rebuilds unusable indexes and index partitions. This can be done in parallel.
For more information, see "om_part_maintain.drop_partitions (Offline only)."
Example:
Consider an OSM schema with partitions P_000000000001000001,
P_000000000002000001, P_000000000003000001 and so on. If
P_000000000003000001 has open orders, then the following statement will drop only
P_000000000001000001 and P_000000000002000001.
execute om_part_maintain.drop_partitions(4000000);
These commands must be run only offline, when OSM is in a maintenance window. The
commands may take several minutes to run.
Note:
Setting purge_policy_purge_related_orders_independently to N may add
several minutes to the time it takes to purge or drop a partition.
When this policy is disabled, an order with related orders can be purged only if all directly and
indirectly related orders are ready to purge, that is, they satisfy the purge criteria (for example,
a_delete_before and a_order_states). However, the order IDs of the related orders may be
within a different partition, or even outside the given purge range.
Note:
Currently this policy is supported by partition-based purge only.
Figure 6-23 shows some of the orders in partition P_000000000000300001 and the
related orders:
• Order 291001 is purged because it satisfies the purge criteria and has no related
orders.
• Order 291002 satisfies the purge criteria and is related to 391002, which is
amended by 491002. Both 391002 and 491002 satisfy the purge criteria and are therefore ready to purge, although they are outside the given purge range. As a result, order 291002 is purged even though 391002 and 491002 are not.
• Order 291003 is retained because it is indirectly related to 491003, which has a
completion date that is after the specified date.
• Order 291004 is retained because it is indirectly related to 491004, which is open.
The following types of relationships are considered when looking for related orders:
• Successor Orders
• Predecessor Orders
• Amendment Orders
• Base Orders (for amendments)
Keep the following in mind when
purge_policy_purge_related_orders_independently is set to N:
• Both direct and indirect relationships are considered.
• It does not matter if the related orders are within the range of order IDs being
purged; it matters only whether they match the purge criteria (a_delete_before
and a_order_states parameters).
• When purging partitions online, if any order in a partition has a related order that
does not match the purge criteria, the partition cannot be purged. Purging online
requires that there are no orders retained in the partition.
• When dropping partitions, if any order in a partition has a related order that does
not match purge criteria, the partition cannot be dropped. Dropping partitions
requires that there are no orders retained in the partition.
• If the total number of orders to be retained exceeds the threshold defined by the
parameter xchg_retained_orders_thres (default 10000), the partition is not purged. This
includes orders that satisfy the purge criteria but must be retained because they are related to orders that do not satisfy those criteria.
Note:
You do not manually run the om_new_purge_pkg.purge_cartridge_orders
package. Design Studio runs this package when the
PURGE_ORDER_ON_UNDEPLOY cartridge management variable is set to
true and you undeploy a cartridge. Design Studio does not run this package
when the FAST_CARTRIDGE_UNDEPLOY cartridge management variable is
set to true when you undeploy a cartridge.
OSM assigns each purge operation a unique purge ID that OSM associates with all audit
records. You can monitor in-progress purges, review past purges, and analyze purge
performance using the following views:
• " OM_AUDIT_PURGE_LATEST": This view returns information about the latest order
purge.
• "OM_AUDIT_PURGE_ALL": This view returns information about all order purges.
You must set the nls_date_format database initialization parameter for queries to return the
time portion in audit views that have DATE datatype columns. For example:
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
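For example, after setting the date format, you can inspect the most recent purge with a simple query:
select * from om_audit_purge_latest;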
Audit Tables
In most cases, the purge audit views provide sufficient data; however, you may need to
query the underlying tables. For example, you may need to review the orders that were purged, the purge criteria, or both, for troubleshooting purposes. OSM stores audit
records in these tables:
• "OM_AUDIT_PURGE": This is the main audit table. It stores the operation name,
status, critical dates and other data.
• "OM_AUDIT_PURGE_ORDER": Stores a synopsis of each purged order with a
timestamp.
• "OM_AUDIT_PURGE_PARAM": Stores the purge criteria, the parameters supplied
to the purge procedure, and a snapshot of relevant session and configuration
parameters when the purge was started.
By default, OSM retains the audit data for at least 90 days. The minimum retention
period is specified by the purge_audit_retention_days configuration parameter in the
om_parameter table. Order purge procedures purge obsolete audit records
automatically before adding new audit records.
If you have partitioned the OSM schema, then OSM partitions the purge audit tables
on a monthly basis. OSM manages the audit partitions automatically. The first order
purge operation in a month automatically adds a new partition for the month. Order
purge procedures drop partitions with obsolete audit records automatically before
adding new audit records.
For example, if the purge capacity is 3 then OSM creates 3 sets of purge tables as follows:
• If an OSM table is range-hash partitioned, OSM creates 3 hash-partitioned purge tables.
• If an OSM table is range partitioned, OSM creates 3 non-partitioned purge tables.
• If an OSM table is reference-partitioned, OSM creates 3xN non-partitioned purge tables,
where by default N is the number of hash sub-partitions of the oldest
OM_ORDER_HEADER partition. You can override the default N when you set up the
exchange tables.
The format of a purge table name is XCHG_OM_PRG_p$xchg_table_id$r, where:
• The XCHG_OM_PRG_ prefix allows quick identification of purge tables.
• p is a sequence number between 1 and the purge capacity, referred to as the logical exchange partition (formatted to 3 digits, 001 to 999). Each partition to be exchanged is
mapped to a logical exchange partition. This means that the maximum supported purge
capacity is 999. This is the first generated component so that purge table names are
grouped by partition when sorted.
• xchg_table_id is an OSM-generated sequence ID for each partitioned table, called the
exchange table ID (formatted to 3 digits). OSM stores the exchange table IDs in the
om_xchg_table table when OSM creates exchange tables. OSM purges this table when
it drops exchange tables. You do not need to know the exchange table IDs.
• r is a 3-digit suffix that identifies which reference partition is exchanged when the table is
reference partitioned; otherwise this value is omitted. r is referred to as the reference
partition position because reference partitions are exchanged in order based on their
position.
Example:
Figure 1 shows how OSM maps OM_ORDER_HEADER partitions to exchange tables when
purging partitions. OSM maps OM_ORDER_HEADER to exchange table ID 001. All range-
hash partitioned tables and the corresponding exchange tables have 64 hash partitions. The
purge table capacity is 2; that is, there are two purge tables for OM_ORDER_HEADER.
Assuming that the purge tables are empty, OSM can purge partitions
P_000000000008000000 and P_000000000009000000 by exchanging them with exchange
tables having logical exchange partitions 001 and 002, respectively.
Example
If you want to exchange 3 partitions without having to purge exchange tables, run the
om_part_maintain.setup_xchg_tables procedure as shown below. The procedure creates
one set of backup tables and 3 sets of purge tables.
execute om_part_maintain.setup_xchg_tables(3);
Managing Cartridges
The main components of a deployed cartridge are:
• The static cartridge metadata that is populated in the OSM database when the
cartridge is deployed or redeployed. This data does not grow or change when
orders are created or processed. Cartridge metadata is loaded into the OSM
server at startup and re-loaded when cartridges are deployed.
• The dynamic order data that is populated in the OSM database whenever an order
is created and as it is being processed.
Note:
OSM does not create EAR files for automation plug-ins. The WebLogic Server console does not display automation plug-in EAR files. Use the console logs to debug issues.
Your primary goals should be to minimize the memory needs and startup time of OSM and to
deploy, redeploy, and undeploy cartridges quickly online. Because cartridge metadata
consumes relatively little space in the database, purging cartridge metadata is not a major
concern.
Cartridge metadata consumes memory resources and takes time to initialize on startup. You
can minimize the memory needs and startup time of OSM by undeploying old cartridges that
are no longer required from the run-time production environment.
To undeploy and redeploy cartridges quickly online, use Fast Undeploy instead of
conventional undeploy.
To purge all cartridges that were undeployed using Fast Undeploy, use the following
statement:
exec om_cartridge_pkg.drop_obsolete_cartridges;
You can run this procedure when OSM is online or offline. This procedure does not
purge any cartridges associated with orders and it does not purge any component
cartridges unless all associated solution cartridges are also selected for purge. The
DBMS output displays the cartridges that were purged and, for those cartridges with
an UNDEPLOYED status but not purged, the reason the procedure did not purge the
cartridge. For more information about DBMS output, see "DBMS Output."
To purge a single undeployed cartridge, use the following statement:
exec om_cartridge_pkg.drop_cartridge(cartridge_id);
Configuration Parameters
The following configuration parameters affect partition maintenance operations. These
parameters are defined in the om_parameter table.
Note:
You can override the parameters marked with (*) for a specific partitioning
realm in the partitioning realm configuration file. OSM persists partitioning
realm-specific parameters in the om_partitioning_realm_param table. For
more information, see "Partitioning Realms."
– purge_policy_time_to_close_wait
– purge_audit_retention_days
– purge_commit_count
• deferred_segment_creation (Oracle Database initialization parameter)
range_partition_size
This parameter is present in both the om_partitioning_realm_param table and
om_parameter table. The value in om_partitioning_realm_param is specific to a
partitioning realm and takes precedence over the value in table om_parameter. This
parameter specifies the size of new partitions.
The initial value in the om_parameter table for this parameter is specified during installation.
You can change it with the following SQL statement, where N is the new value (for example
100000):
update om_parameter
set value = N
where mnemonic = 'range_partition_size';
commit;
Updates to this parameter do not affect existing partitions. The upper bound of a new partition
is the greatest partition upper bound plus the value of this parameter.
The value of this parameter in the om_partitioning_realm_param table is inserted and updated by changes in the partitioning realm XML configuration. For details, refer to "Partitioning Realms."
subpartitions_number
Specifies the number of hash sub-partitions. You choose the initial value of this parameter
during installation. You can change it with the following SQL statement, where N is the new
value (for example, 32).
update om_parameter
set value = N
where mnemonic = 'subpartitions_number';
commit;
Updates to this parameter do not affect existing partitions. If you change this parameter and
you use om_part_maintain.purge_partitions for purging, you must re-run
om_part_maintain.setup_xchg_tables when it is time to purge partitions that were added
after the change. This is because the number of hash partitions of the purge tables must
match the number of hash sub-partitions of the range partitions to be purged.
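For example, if you reduced subpartitions_number from 64 to 32, you could recreate the exchange tables with a matching number of hash partitions by passing the a_subpartition_count_override parameter (described in the PL/SQL API reference); the capacity value 3 is illustrative:
execute om_part_maintain.setup_xchg_tables(a_xchg_purge_capacity => 3, a_subpartition_count_override => 32);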
default_xchg_capacity
Specifies the default purge capacity if om_part_maintain.setup_xchg_tables is called with
an unspecified capacity. If it is not configured, the default is 3.
xchg_retained_orders_thres
If the number of orders to be excluded from purging in a partition exceeds this
threshold, the partition cannot be purged for performance reasons. The default is
10000. You can override the default in the om_parameter table.
degree_of_parallelism
Specifies the default degree of parallelism for statements that are run in parallel. It
applies to queries, DML, and DDL statements. However, the degree of parallelism for
rebuilding indexes is configured by the degree_of_parallelism_rebuild_indexes and
degree_of_parallelism_rebuild_xchg_indexes parameters. If this parameter is not
specified, the default degree of parallelism is 4.
This parameter is also used for recreating global partitioned indexes when the
RECREATE GLOBAL policy is used. However, the degree of parallelism for rebuilding
index partitions is configured by the degree_of_parallelism_rebuild_indexes and
degree_of_parallelism_rebuild_xchg_indexes parameters. For more information,
see "purge_policy_rebuild_unusable_indexes." You can use the
"om_part_maintain.set_dop (Online or Offline)" procedure to set this parameter.
For more information, see "Parallel Execution."
degree_of_parallelism_rebuild_indexes
Specifies the default degree of parallelism for rebuilding index partitions of OSM tables
except exchange tables. If this parameter is not specified, the default degree of
parallelism is 2. This is less than the default value for degree_of_parallelism because
you cannot rebuild an entire partitioned index with a single statement. You must rebuild
each partition or sub-partition, which contains only a fraction of the data. Therefore, the overhead of increased parallelism may have a negative impact on rebuild performance.
For example, performance tests might show that an optimal value for
degree_of_parallelism is 32 whereas the optimal value for
degree_of_parallelism_rebuild_indexes is only 4.
You can use the "om_part_maintain.set_dop_rebuild_indexes (Online or Offline)"
procedure to set this parameter.
The degree of parallelism for rebuilding indexes of exchange tables is configured with
the degree_of_parallelism_rebuild_xchg_indexes parameter.
degree_of_parallelism_rebuild_xchg_indexes
Specifies the default degree of parallelism for rebuilding index partitions of exchange
tables. If this parameter is not specified, the default degree of parallelism is 1. This is
because you cannot rebuild an entire partitioned index with a single statement. You
must rebuild each partition or sub-partition, which contains only a fraction of the data.
Because exchange indexes are usually small, rebuilding them serially is usually faster.
You can use the "om_part_maintain.set_dop_rebuild_xchg_indexes (Online or Offline)"
procedure to set this parameter.
purge_job_class
This parameter in the om_parameter table specifies the class for purge jobs. A database job
must be part of exactly one class. The default value is DEFAULT_JOB_CLASS, which is also
the default database job class. If your database is Oracle RAC, jobs in the
DEFAULT_JOB_CLASS class can run on any node.
If you use a partition purge strategy, restricting purge jobs to a single node significantly
improves performance. Specifically, if the jobs that restore retained orders run on all nodes,
cluster waits could account for 40% or more of the database time. Cluster waits increase with
the degree of parallelism and the number of nodes. You can eliminate cluster waits by
restricting job execution on a single node as follows:
1. Create a database service, for example, OSM_MAINTAIN, with a single preferred node
and any number of available nodes. Refer to Oracle Database documentation for
instructions about how to create a service using Oracle Enterprise Manager or srvctl.
2. Create a job class, for example, OSM_MAINTAIN, and associate it with the new service:
exec dbms_scheduler.create_job_class(
'OSM_MAINTAIN', service => 'OSM_MAINTAIN');
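Finally, point OSM at the new job class so that purge jobs use it. The following is a minimal sketch, assuming purge_job_class follows the same mnemonic/value convention in the om_parameter table as the other parameters in this chapter (if no such row exists, insert one instead):
update om_parameter
set value = 'OSM_MAINTAIN'
where mnemonic = 'purge_job_class';
commit;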
parallel_execute_chunk_size
This is an advanced parameter that specifies the chunk size for parallel execution using jobs.
For more information, see "Tuning parallel_execute_chunk_size."
partition_auto_creation
This parameter in the om_parameter table specifies whether OSM is enabled to add a
partition automatically when a new order ID does not map to any partition. Valid values are Y
(enabled) and N. Oracle strongly recommends that you plan to add partitions manually and
disable automatic creation for all production and performance environments, especially if you
use Oracle RAC. Adding partitions online causes high contention in the database, resource-busy exceptions, and transaction timeouts that could result in failed orders and instability of OSM (especially during a busy period).
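For example, the following sketch disables automatic partition creation, assuming the standard om_parameter mnemonic/value convention used elsewhere in this chapter:
update om_parameter
set value = 'N'
where mnemonic = 'partition_auto_creation';
commit;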
purge_policy_rebuild_unusable_indexes
This parameter in the om_parameter table specifies the default policy for rebuilding
unusable indexes. Possible values are:
• om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL): This
means that the preferred method to rebuild a global partitioned index that became
unusable after a partition maintenance operation is to drop and recreate the entire
index. This is the default, unless the global index is not partitioned, it supports a
unique constraint, or OSM is offline. Recreating a global partitioned index scans
the table only once and it can be done efficiently with a high degree of parallelism,
so it is more efficient and much faster than rebuilding each index partition
separately. The default degree of parallelism for recreating global indexes is
specified by the degree_of_parallelism parameter.
• om_part_maintain.c_rebuild_idx_rebuild (REBUILD): This means that the
preferred method to rebuild global partitioned indexes is one partition at a time
using ALTER INDEX REBUILD PARTITION. The default degree of parallelism for
rebuilding index partitions is specified by the
degree_of_parallelism_rebuild_indexes parameter.
purge_policy_purge_related_orders_independently
This parameter in the om_parameter table specifies whether orders should be purged
independently of any related orders they may have. Valid values are Y (purge
independently is enabled) and N (purge independently is disabled). By default, orders
are purged independently. For more information, see the purge policy section in
"Purging Related Orders Independently."
Note:
Setting purge_policy_purge_related_orders_independently to N may add
several minutes to the time it takes to purge or drop a partition.
purge_policy_consolidate_partitions
This parameter in the om_parameter table specifies the number of partitions to
consolidate into a single partition when purging. Valid values are between 1 and 10
and the default value is 3. For example, a value of 5 means the purge procedure can
combine the retained orders of up to 5 successive partitions into a single partition and
drop the other 4 partitions.
In order for partitions to be consolidated, the following conditions must be satisfied:
• Partitions can be dropped (argument a_drop_empty_ptns is true)
• Purging is done offline (argument a_online is false)
• Purge capacity is not exhausted
purge_policy_time_to_close_wait
This purge policy can improve the performance of row-based purges and decrease purge
rate fluctuations. The policy specifies a delay before beginning to purge eligible orders
so that the majority of the orders that were created on the same day are closed. The goal is
to decrease I/O. For example, if 80% of orders complete in 4 days and the remaining 20%
complete slowly over a much longer period, you could set purge_policy_time_to_close_wait
to 4.
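A minimal sketch of that change, assuming the same om_parameter convention as the other purge policies in this chapter:
update om_parameter
set value = '4'
where mnemonic = 'purge_policy_time_to_close_wait';
commit;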
Example 1 (temporal affinity disabled): In this example, the retention period is 10 days and
the row-based order purge runs daily. The purge procedure runs at 10:30 PM with the
a_order_states argument set to closed orders only (v_closed_orders) and the
a_delete_before argument set to sysdate-10 (the current date/time minus 10 days). This
purges all orders that closed before 10:30 PM 10 days ago. If 60% of orders close within the same day and 30% close on the next day before 10:30 PM, then 90% of orders close within 2 days.
If temporal affinity is disabled, closed orders are purged as follows. For simplicity, ignore the
10% of orders that close slowly over several days and are therefore purged at a slower rate.
• No orders are purged on days 1 to 10 because they are all either open or in the 10 day
retention period.
• Orders purged at 10:30 PM on day 11: All orders that closed before 10:30 PM on day 1
(60% of the orders created on day 1).
• Orders purged at 10:30 PM on day 12: All orders that closed before 10:30 PM on day 2
(60% of the orders created on day 2 and 30% of orders created on day 1).
• Orders purged at 10:30 PM on day 13: All orders that closed before 10:30 PM on day 3
(60% of the orders created on day 3 and 30% of orders created on day 2).
• …
• Orders purged at 10:30 PM on day N: All orders that closed before 10:30 PM on day
N-10 (60% of the orders created on day N-10 and 30% of orders created on day N-11).
Example 2 (temporal affinity enabled and 90% of the orders closed within 2 days): In
the previous example, if purge_policy_time_to_close_wait=1 (1 day), purging would be
delayed by one day. 90% of the orders created on a day would be purged at the same time
as the orders that were created and closed on the same day. The purge procedure runs at 10:30 PM with a_order_states and a_delete_before configured as in Example 1. However, this configuration purges all orders that were created 11
days before 10:30 PM and were closed 10 days before 10:30 PM. The creation date criterion
is based on 1 day time-to-close wait and 10 days retention period.
• No orders are purged on days 1 to 10 because they are all either open or in the 10 day
retention period.
• No orders purged on day 11: The orders closed on day 1 are out of retention but they
have to wait an extra day.
• Orders purged at 10:30 PM on day 12: All orders created before 10:30 PM on day 1 and
closed before 10:30 PM on day 2 (90% of the orders created on day 1).
• Orders purged at 10:30 PM on day 13: All orders created before 10:30 PM on day 2 and
closed before 10:30 PM on day 3 (90% of the orders created on day 2).
• …
• Orders purged at 10:30 PM on day N: All orders created before 10:30 PM on day
N-11 and closed before 10:30 PM on day N-10 (90% of the orders created on day
N-11).
Example 3 (temporal affinity enabled and 90% of the orders closed within 3
days): This is similar to the previous example, except that we want a 2 day delay
instead of 1 (purge_policy_time_to_close_wait=2).
• No orders are purged on days 1 to 10 because they are all either open or in the 10
day retention period.
• No orders purged on day 11: The orders closed on day 1 are out of retention but
they have to wait.
• No orders purged on day 12: The orders closed on day 1 and 2 are out of retention
but they have to wait.
• Orders purged at 10:30 PM on day 13: All orders created before 10:30 PM on day
1 and closed before 10:30 PM on day 3 (90% of the orders created on day 1).
• Orders purged at 10:30 PM on day 14: All orders created before 10:30 PM on day
2 and closed before 10:30 PM on day 4 (90% of the orders created on day 2).
• Orders purged at 10:30 PM on day N: All orders created before 10:30 PM on day
N-12 and closed before 10:30 PM on day N-10 (90% of the orders created on day
N-12).
purge_audit_retention_days
This parameter in the om_parameter table specifies the minimum number of days to
retain purge audit data. The default is 90 days. OSM automatically purges the audit
data after the data exceeds this time limit. For more information, see "Auditing and
Monitoring Order Purges."
deferred_segment_creation
Oracle Database introduced deferred segment creation in 11gR2. If the
deferred_segment_creation initialization parameter is set to true (the default), it
forces the database to wait until the first row is inserted into a table/partition before
creating segments for that table/partition and its dependent objects. In general,
deferred segment creation saves disk space for unused tables/partitions. The main
benefit to OSM is that it minimizes the time it takes to create a partition. However, in
high volume deployments, especially on Oracle RAC, deferred segment creation can
lead to serious performance issues when the database is forced to create the deferred
segments of a partition in order to store new orders. This occurs when the previous
partition is exhausted. The result is high "library cache lock" waits that could last for an
extended period of time (frequently more than 30 minutes). In high volume
deployments, it is strongly recommended that you disable deferred segment creation.
To disable deferred segment creation, log in to the database as the SYS user and run
the following statements:
alter system set deferred_segment_creation=false scope=both sid='*';
execute dbms_space_admin.materialize_deferred_segments('<schema_name>');
purge_commit_count
Specifies how frequently each purge job issues a commit command. For example, the value
10 means that 10 orders are purged before a commit is done. Unless you perform extensive
performance purge tests to determine the optimal value for this parameter, Oracle
recommends that you leave it at the default value. If not present, the value 10 is used.
DBMS Output
It is strongly recommended that you spool DBMS output to a file, especially for partition
maintenance operations. The DBMS output includes valuable information for troubleshooting
and performance tuning, such as elapsed processing times and error traces.
Note:
The DBMS output is sent to the client at the end of the operation. Oracle Database
does not provide any mechanism to flush output during the procedure.
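For example, in SQL*Plus you could capture the output of a maintenance run as follows (the spool file name is illustrative):
set serveroutput on
spool part_maintain.log
-- run the partition maintenance procedure here
spool off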
Table 6-4 and Table 6-5 list the order states and pre-defined aggregate order states that can be supplied to the purge script. You can supply an order state, a pre-defined aggregate order state, or a custom aggregate state. Order states and pre-defined aggregate order states are defined in the om_new_purge_pkg package.
• Example of an order state to purge cancelling orders:
om_new_purge_pkg.v_cancelling_orders
• Example of a pre-defined aggregate order state to purge all closed and cancelled
orders:
om_new_purge_pkg.v_closed_or_cancelled_orders
• Example of a custom aggregate order state to purge failed and aborted orders:
om_new_purge_pkg.v_failed_orders +
om_new_purge_pkg.v_aborted_orders
Note:
While forming a custom aggregate state, ensure the following:
• Use only order states, not pre-defined aggregate order states.
• Do not use the same state twice.
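Because the order state constants are distinct powers of two (see Table 6-4), a custom aggregate state is simply their sum. For example, the failed-plus-aborted aggregate shown above evaluates to:
om_new_purge_pkg.v_failed_orders + om_new_purge_pkg.v_aborted_orders = 64 + 2 = 66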
Table 6-4 shows the order state constants and their corresponding values.
Constant Value
v_completed_orders 1
v_aborted_orders 2
v_not_started_orders 4
v_suspended_orders 8
v_cancelled_orders 16
v_wait_for_revision_orders 32
v_failed_orders 64
v_waiting_orders 128
v_in_progress_orders 256
v_amending_orders 512
v_cancelling_orders 1024
Table 6-5 shows the pre-defined aggregate order states and their corresponding
values.
Constant Value
v_closed_orders 3 (v_completed_orders + v_aborted_orders)
v_closed_or_cancelled_orders 19 (v_completed_orders + v_aborted_orders +
v_cancelled_orders)
v_not_running_orders 252 (v_waiting_orders + v_failed_orders +
v_wait_for_revision_orders + v_cancelled_orders +
v_suspended_orders + v_not_started_orders)
v_compensating_orders 1536 (v_cancelling_orders + v_amending_orders)
v_running_orders 1792 (v_compensating_orders + v_in_progress_orders)
v_open_orders 2044 (v_running_orders + v_not_running_orders)
v_all_orders 2047 (v_open_orders + v_closed_orders)
The a_delete_before parameter allows you to further narrow the purge criteria based on the
order timestamp (for example, you might want to retain closed orders for at least 30 days).
Table 6-6 shows which timestamp in the om_order_header table is compared to
a_delete_before depending on a_order_states and the order status.
Table 6-6 Order Purge Based on Timestamp, Order State, and Order Status
Parallel Execution
The om_part_maintain API performs many operations in parallel:
• Parallel queries and most DML and DDL statements are run in parallel using parallel
servers, which apply multiple CPU and I/O resources to a single database operation.
Examples include copying orders into the backup tables and rebuilding unusable indexes.
• Some operations are run in parallel using the dbms_parallel_execute package, which
divides work into chunks processed in parallel by database jobs. Row-based order purge
and the restore stage of purge_partitions are performed this way. If your
database is Oracle RAC, it is recommended that you create a database job class
to restrict job processing on a single node to eliminate cluster waits. For more
information, see "purge_job_class."
Procedures that support parallelism use the a_parallelism parameter, which allows
you to specify the desired degree of parallelism for those statements that can be run in
parallel.
The degree of parallelism can be:
• Greater than 1: Statements that can be run in parallel are run with the specified
degree of parallelism.
• 1: All statements are run serially.
• 0: The degree of parallelism is computed by the database and it can be 2 or
greater. Statements that can be run in parallel always run in parallel.
• Less than 0: The degree of parallelism is computed by the database and it can be
1 or greater. If the computed degree of parallelism is 1, the statement runs serially.
Indexes are always rebuilt in parallel.
If you leave a_parallelism unspecified, OSM uses the default parallelism configured
by these parameters:
• degree_of_parallelism
• degree_of_parallelism_rebuild_indexes
• degree_of_parallelism_rebuild_xchg_indexes
Concurrency Restrictions
Exchange table, partition management, and purge procedures acquire an exclusive
user lock to prevent concurrent processing of other management procedures, which
could result in unrecoverable errors. Each OSM schema uses a different lock
specifically for this package. The lock is released automatically at the end of the operation.
The database also releases user locks automatically when a session terminates.
Specifically, the following procedures acquire an exclusive lock to prevent concurrent
operations:
• setup_xchg_tables
• drop_xchg_tables
• purge_xchg_prg_tables
• purge_partitions
• drop_empty_partitions
• drop_partitions
• add_partition and add_partitions
• equipartition
• purge_orders
• select_orders
• purge_selected_orders
• resume_purge
om_part_maintain.setup_xchg_tables
If you purge partitions, you must create exchange tables after a new installation and each
time you upgrade the schema. If the exchange tables are not up to date,
om_part_maintain.purge_partitions reports an error. If you only drop partitions, exchange
tables are not required.
This procedure first calls drop_xchg_tables to drop all existing exchange tables and reclaim
space. If a_force is false and an exchange table is not empty, it throws an exception. Upon
successful completion, it sets the sys$xchg_purge_capacity and sys$xchg_purge_seq
system parameters to the purge capacity and 1, respectively (in the om_parameter table).
The parameters are:
• a_xchg_purge_capacity: Specifies the exchange capacity in the range 0-999. If it is not
specified, it uses the value of the default_xchg_capacity parameter configured in the
om_parameter table. If default_xchg_capacity is not set, the default capacity is 3. If the
specified capacity is 0 then it creates backup tables but not purge tables. If the specified
or configured capacity is illegal, it throws an exception.
• a_tablespace: Specifies the tablespace where you want the exchange tables to be
created. If you do not specify it, the database default tablespace is used.
• a_force: Specifies whether existing exchange tables should be dropped even if they are
non-empty. If this is false and an exchange table is not empty, an exception is thrown. In
this case, exchange tables are left in an inconsistent state (new exchange tables are not
created but existing exchange tables might be partially dropped).
• a_subpartition_count_override: Specifies the number of hash partitions for exchange
tables. Oracle Database does not allow a range-hash partition to be exchanged with the
hash-partitioned table if the number of hash partitions of the range partition and the table
do not match. By default, the number of hash partitions of the exchange tables for
om_order_header is the same as the number of hash sub-partitions of the oldest
om_order_header partition.
om_part_maintain.add_partition and om_part_maintain.add_partitions (Offline)
Calling add_partition is equivalent to:
add_partitions(1, a_tablespace);
The upper bound of each new partition is the greatest partition upper bound plus the value of
the range_partition_size parameter. If found, the range_partition_size for the realm is used
(found in table om_partitioning_realm_param). If not found, the range_partition_size in
table om_parameter is used. The upper bound is used in the partition name. For example, if
the new partition's upper bound is 100,000, the partition name is P_000000000000100000
(always formatted to 18 characters).
This procedure inserts a new row into table om_order_id_block to represent the range of
order_seq_ids for the new partition. In addition to the order ID range, the order ID block
contains the status (for example, AVAILABLE for the newly added partition), the dbinstance
(-1 until the block changes to ACTIVE), and the partitioning realm associated with the block.
You must run this procedure offline. Running this procedure online causes high contention in
the database and transaction timeouts that could result in failed orders and instability of
OSM.
The parameters are:
• a_count: The number of partitions to add.
• a_tablespace: The tablespace for the new partitions. This procedure modifies the default
tablespace attribute of partitioned tables with the specified tablespace before adding
partitions. If you do not specify the tablespace or the input argument is null, each partition
is created on the default tablespace of the partitioned table (for example, on the same
tablespace as the most recently added partition).
• a_realm_mnemonic: The partitioning realm mnemonic (case-insensitive) that the new
partition belongs to. If the value is null, the partition is assigned to the default_order
realm. If the realm is disabled, an error occurs; you can ignore this error by entering true
in the a_force argument.
• a_force: If the value of this parameter is true, partitions can be added for disabled
realms. This is useful because partitioning realms are often created in a disabled state,
so partitions can be added for a realm before you enable it.
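For example, the following call adds two partitions for the default_order realm (a minimal sketch; OSM_DATA is a hypothetical tablespace name):
begin
  om_part_maintain.add_partitions(
    a_count      => 2,
    a_tablespace => 'OSM_DATA');  -- hypothetical tablespace name
end;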
Dropping newly added partitions: If you want to drop several new partitions, perhaps
because you want to re-create them (for example, with a different number of hash sub-
partitions and/or on a different tablespace) or because you inadvertently added a large
number of partitions, you can drop those partitions that are still empty using
drop_empty_partitions.
Example (dropping newly added partitions): Assume that you inadvertently added
partitions P_000000000000600001, P_000000000000700001, and
P_000000000000800001, all of which are still empty, and you want to drop them (for
example, to re-create them with a different number of hash sub-partitions or on a different
tablespace).
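A minimal sketch of the corresponding call, assuming drop_empty_partitions accepts the a_order_id_ge and a_order_id_lt range arguments used elsewhere in this chapter, and assuming a partition size of 100000:
begin
  om_part_maintain.drop_empty_partitions(
    a_order_id_ge => 500001,   -- lower bound of P_000000000000600001
    a_order_id_lt => 800001);  -- non-inclusive upper bound of P_000000000000800001
end;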
om_part_maintain.purge_partitions
This procedure purges orders from a range of partitions. A partition qualifies for purging
under the following conditions:
• If run online:
– All of the orders are either closed (complete or aborted) or cancelled.
– All of the contained orders satisfy the purge criteria specified by the a_delete_before
and a_order_states arguments.
– All of the contained order IDs are within the purge range specified by a_order_id_lt
and a_order_id_ge. The range of mapped order IDs does not need to be a subset of
the specified range. What matters is the range of actual order IDs.
• If run offline:
– Some or all of the contained orders satisfy the purge criteria specified by the
a_delete_before and a_order_states arguments.
– Some or all of the contained order IDs are within the purge range specified by
a_order_id_lt and a_order_id_ge. The range of mapped order IDs does not need to
be a subset of the specified range. What matters is the range of actual order IDs.
– The number of orders to be excluded from purging (for example, those orders that do
not satisfy the previous two conditions) does not exceed the threshold specified by
the xchg_retained_orders_thres parameter.
– The partition belongs to a partitioning realm with a purge strategy of PARTITION-
BASED or ANY. For more information, see the purge strategy section in "Partitioning
Realms."
Oracle recommends that you back up the OSM schema prior to running this procedure and
that you gather statistics after you finish purging.
If you run this procedure online, you might experience high contention due to exclusive locks
acquired by Oracle Database. Oracle recommends that you run this procedure either offline
or off-peak.
If you run this procedure offline, you can purge a partition that contains orders that do not
satisfy the purge criteria as long as the number of retained orders in that partition does not
exceed the threshold specified by the xchg_retained_orders_thres parameter. In this case,
the retained orders are copied to the backup tables prior to the exchange operation and they
are restored (copied again) into the partitioned tables after the exchange operation. Because
these are relatively expensive operations, the threshold ensures that they will complete in a
timely fashion. Both backup and restore are run in parallel as specified by the a_parallelism
argument.
If this procedure is run offline, it disables foreign keys. This is necessary when purging
partitions with retained orders. Disabling foreign keys is unsafe when the OSM application is
online because it can result in data integrity violations. Therefore, OSM must remain offline
until the foreign keys are re-enabled.
Partitions are purged one by one end-to-end, that is, from all partitioned tables. For example,
if you want to purge partitions P_000000000001000001, P_000000000002000001, and
P_000000000003000001 then P_000000000001000001 will be purged first from all
partitioned tables, then P_000000000002000001 and so on.
This procedure can consolidate retained orders from multiple partitions into a single partition,
to maximize reclaimed space, reduce the number of partitions, and minimize downtime. This
is done by purging successive partitions in iterations. The maximum number of partitions
consolidated in each iteration is limited by the parameter
purge_policy_consolidate_partitions. More precisely, this procedure purges successive
partitions that qualify for purging as follows:
1. Copies the orders that do not satisfy the purge criteria from those partitions into
the backup tables. This is a relatively fast operation because it is performed in
parallel and the backup tables have few indexes and constraints.
2. Purges each partition entirely by exchanging it with purge tables. This is a fast
operation because EXCHANGE PARTITION only updates metadata in the data
dictionary.
3. Drops N-1 of those partitions. This is a fast operation because the partitions are
now empty.
4. Restores the retained orders from the backup tables into the Nth partition with their
order IDs unchanged. This is also performed in parallel using the
dbms_parallel_execute package. However, this step is slower than backup
because the partitioned tables have more indexes and constraints.
The EXCHANGE PARTITION operation is performed with the following options:
• INCLUDING INDEXES: This means that local index partitions or subpartitions are
also exchanged. This ensures that local indexes remain usable during the
exchange, that is, they do not have to be rebuilt.
• WITHOUT VALIDATION: By default, the exchange operation is performed WITH
VALIDATION, which means that Oracle Database returns an error if any rows in
the exchange table do not map into partitions or subpartitions being exchanged.
This check is unnecessary when the exchange table is empty.
• If this procedure is run online and the table has global indexes that enforce unique
constraints then the exchange is performed with the following options:
– UPDATE GLOBAL INDEXES: This means that global indexes are updated
during the exchange and therefore remain usable. Otherwise, unusable global
indexes that enforce unique constraints would result in ORA-01502
exceptions. (By default, unusable global indexes that do not enforce unique
constraints are ignored and therefore are not an issue; this is controlled by
the SKIP_UNUSABLE_INDEXES initialization parameter. Therefore, if a table
has no such global indexes, or if this procedure is run offline, rebuilding
unusable global indexes is deferred for performance reasons.)
– PARALLEL: This means that global indexes are updated in parallel for
performance reasons. It does not alter the global indexes to parallel.
After each partition is purged end-to-end, the sys$xchg_purge_seq counter in the
om_parameter table increments to the next logical exchange partition. When the
logical exchange partition exceeds the purge capacity, this counter cycles to 1.
The procedure exits when:
• Time expires.
• It encounters a partition with a lower bound greater than or equal to the upper
bound of the specified range.
• The number of hash sub-partitions of the next om_order_header partition is
different than the number of partitions of the corresponding exchange table. The
number of hash partitions of each exchange table is the same as the
number of hash sub-partitions of the oldest partition of the corresponding range-
hash partitioned table. If newer partitions have a different number of hash sub-
partitions (because you changed the subpartitions_number parameter) then you
will not be able to purge the newer partitions until you drop the older partitions and
re-run setup_xchg_tables.
All disabled constraints are re-enabled at the end (with NOVALIDATE for performance
reasons).
The parameters are:
• a_online: Specifies whether this procedure is being run online. If it is true, it ignores
partitions with open orders and partitions with orders that do not satisfy the purge criteria
(only entire partitions can be purged online).
• a_delete_before: Only orders with a timestamp older than this date and time are eligible
for purging. For more information, see "Specifying Purge Criteria."
• a_order_states: Only orders with one of these states are eligible for purging. By default,
only closed orders are eligible for purging. For more information, see "Specifying Purge
Criteria."
• a_order_id_lt and a_order_id_ge: If a_order_id_ge is not null then only orders with
order ID greater than or equal to this value are eligible for purging. If a_order_id_lt is not
null then only orders with order ID less than this value are eligible for purging. If
a_order_id_lt is null, it will be defaulted to the non-inclusive upper bound of the latest
used partition. (This ensures that new empty partitions beyond the currently active
partition are not dropped accidentally.) If a partition contains both order IDs in this range
and outside this range then the partition cannot be purged unless the out-of-range orders
can be retained (for example, the purge is done offline and the total number of retained
orders in that partition does not exceed the threshold specified by the
xchg_retained_orders_thres parameter).
• a_stop_date: If it is not null then the procedure exits when the date and time are
reached. This is done on a best-effort basis, since a premature exit could leave data in
inconsistent state. The time is checked periodically. The elapsed time between checks
could be as high as the time it takes to purge as many partitions as the spare purge
capacity. Only non-critical deferrable operations, such as purging exchange tables,
are skipped when the time expires.
• a_drop_empty_ptns: Specifies whether empty partitions should be dropped. The default
is true, since dropping empty partitions is a fast operation. In this case, this procedure
can purge as many successive partitions at a time as the spare capacity, which reduces
the time it takes to restore orders and therefore downtime. If this argument is false,
each partition to be purged must go through the backup-purge-restore process
separately.
• a_purge_xchg_prg_tables: Specifies whether exchange tables should be purged as
well. If it is true then it purges exchange tables, as long as time has not expired and at
least one partition was purged. This is a relatively slow operation, so the default is false. In
this case, the number of partitions that can be purged by running this procedure once is
limited by the space purge capacity.
• a_parallelism: Specifies the degree of parallelism for backup and restore operations. If it
is null, it uses the parallelism configured by the degree_of_parallelism parameter. For
more information, see "Parallel Execution."
Exceptions: This procedure performs a number of checks to ensure it can proceed with
the purge. If a check fails, it throws one of the following exceptions:
• ORA-20142: The schema is not equi-partitioned. Run the equipartition procedure.
• ORA-20160: The schema is not partitioned. You can only use this procedure if your
schema is partitioned.
• ORA-20162: There are no exchange tables. Run the setup_xchg_tables procedure.
• ORA-20163: The exchange tables are not up-to-date. This means that the schema
has been upgraded after the exchange tables were created. Re-run the
setup_xchg_tables procedure.
• ORA-20166: There is another in-progress maintenance operation.
• ORA-20170: Failed to suspend database jobs.
• ORA-20171: The procedure was run with a_online=false and it detected that
OSM is running.
Example (purge all orders that were closed at least 180 days ago): Suppose you
want to purge all complete or aborted orders that were closed at least 180 days ago.
Assuming that most partitions contain some orders that do not satisfy these criteria,
you decided to run purge_partitions offline. You also want to defer dropping empty
partitions and purging the exchange tables until the system is restarted. This is how
you can do it:
begin
  om_part_maintain.purge_partitions(
    a_online                => false,
    a_delete_before         => trunc(sysdate) - 180,
    a_order_states          => om_new_purge_pkg.v_closed_orders,
    a_drop_empty_ptns       => false,
    a_purge_xchg_prg_tables => false,
    a_parallelism           => 4);
end;
Example (ignore old partitions that contain only a few orders): This example adds
to the scenario of the previous example. Assume that old partitions with non-inclusive
upper bound up to 5600000 contain a small number of orders that can be purged but
cannot be purged entirely (for example, because they still contain open orders).
Purging those partitions would be unproductive, since it could exhaust the exchange
capacity. Therefore you decided to use the a_order_id_ge parameter to ignore them
for now:
begin
  om_part_maintain.purge_partitions(
    a_online                => false,
    a_delete_before         => trunc(sysdate) - 180,
    a_order_states          => om_new_purge_pkg.v_closed_orders,
    a_order_id_ge           => 5600000,
    a_drop_empty_ptns       => false,
    a_purge_xchg_prg_tables => false,
    a_parallelism           => 4);
end;
om_part_maintain.purge_entire_partition
This procedure purges the given partition entirely (all orders). The partition is not
dropped. The following two calls are equivalent, assuming that the partition size is
100000:
execute om_part_maintain.purge_entire_partition(
  a_online         => true,
  a_partition_name => 'P_000000000000400001');
execute om_part_maintain.purge_partitions(
  a_online          => true,
  a_delete_before   => om_const_pkg.v_no_date,
  a_order_states    => om_new_purge_pkg.v_all_orders,
  a_order_id_lt     => 400001,
  a_order_id_ge     => 300001,
  a_stop_date       => null,
  a_drop_empty_ptns => false);
Parameters:
• a_online: Specifies whether this procedure is being run online. If this parameter is true, it
ignores partitions with open orders and partitions with orders that do not satisfy the purge
criteria (only entire partitions can be purged online).
• a_partition_name: The name of the partition to purge.
• a_purge_xchg_prg_tables: Specifies whether exchange tables should be purged as
well. If this parameter is true, it purges exchange tables, as long as time has not expired
and at least one partition was purged. This is a relatively slow operation, so the default is
false. In this case, the number of partitions that can be purged by running this procedure
once is limited by the space purge capacity.
• a_purge_orphan_data: Specifies whether you want orphan data to be purged after the
partition is purged. The default is true. You may want to defer purging of orphan data if
you used om_part_maintain.backup_selected_ords to manually back up selected
orders, which you plan to restore with om_part_maintain.restore_orders.
om_part_maintain.estimate_ptn_purged_space
This function estimates the amount of disk space (in bytes) that would be reclaimed by
purging or dropping partitions.
This function simulates running om_part_maintain.purge_partitions; refer to the
purge_partitions API reference for a description of the parameters, exit conditions, and
possible exceptions.
Example (estimate the space reclaimed by purging all orders that were closed at least
180 days ago):
begin
  dbms_output.put_line('Space Reclaimed (bytes): ' ||
    om_part_maintain.estimate_ptn_purged_space(
      a_delete_before => trunc(sysdate) - 180,
      a_order_states  => om_new_purge_pkg.v_closed_orders));
end;
Example (estimate the space reclaimed by dropping partitions): The following example
shows how to estimate the space reclaimed by dropping all partitions with an upper bound
less than or equal to 300001. Note that the a_delete_before and a_order_states
parameters have been set to values that include all orders in the partition.
begin
  dbms_output.put_line('Space Reclaimed (bytes): ' ||
    om_part_maintain.estimate_ptn_purged_space(
      a_delete_before => om_const_pkg.v_no_date,
      a_order_states  => om_new_purge_pkg.v_all_orders,
      a_order_id_lt   => 300001));
end;
om_part_maintain.purge_xchg_bck_tables
This procedure purges all exchange backup tables. Normally you do not need to run
this procedure because backup tables are purged automatically when all order data is
restored. The implementation runs TRUNCATE TABLE, so purged data cannot be restored.
If a_drop_storage is true, backup tables are truncated with the DROP STORAGE option to
reclaim space. Otherwise, they are truncated with the REUSE STORAGE option to retain the
space from the deleted rows. If you never reclaim the space, its size is limited by the largest
volume of order data copied into the backup tables. By default, space is reused for
performance reasons and in order to minimize downtime: First, inserts are more efficient if
space is already allocated. Second, purging the backup tables is faster if space is reused.
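For example, to purge the backup tables and reclaim their space, a sketch using the a_drop_storage argument described above:
execute om_part_maintain.purge_xchg_bck_tables(a_drop_storage => true);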
om_part_maintain.purge_xchg_prg_tables
This procedure purges all exchange purge tables to reclaim space. It does not purge
backup tables. The implementation runs TRUNCATE TABLE … DROP STORAGE, so
purged data cannot be restored.
om_new_purge_pkg.delete_order
This procedure unconditionally deletes the given order from the database. Note that
this procedure does not issue a commit, in contrast to most purge procedures. It is the
responsibility of the user to issue a commit or rollback.
This operation is audited.
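A minimal sketch (the a_order_seq_id parameter name and the order ID value are assumptions for illustration):
begin
  om_new_purge_pkg.delete_order(a_order_seq_id => 123456);  -- hypothetical order ID
  commit;  -- delete_order does not commit; you must commit or roll back yourself
end;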
om_new_purge_pkg.purge_orders
This procedure purges orders that satisfy the given criteria. It is the main
implementation of row-based order purge. Orders are purged by database jobs. The
procedure finds the order IDs that satisfy the purge criteria, inserts them into the
OM_ORDER_ID_FOR_PURGE staging table, splits them into chunks, and distributes
the chunks to database jobs for parallel purge. Each chunk is processed by deleting
one order at a time with periodic commits. This approach ensures that a) an order is either
purged entirely or not at all and b) a purge may succeed partially even in the event of errors.
This operation is audited.
Running this procedure is equivalent to running select_orders and purge_selected_orders.
However, purge_orders always starts a new purge by clearing the
OM_ORDER_ID_FOR_PURGE staging table, whereas select_orders only adds order IDs
to the staging table.
Table 6-9 describes the possible outcomes. The a_status output parameter is set
accordingly.
Parameters:
• a_status: Returns the purge status.
• a_delete_before: Only orders with a timestamp older than this date and time are eligible
for purging. See "Specifying Purge Criteria" for more information.
• a_order_states: Only orders with one of these states are eligible for purging. See
"Specifying Purge Criteria" for more information.
• a_stop_date: The end of the purge window. If it is not null then the procedure exits when
this date and time are reached. The time is checked after each order delete.
• a_order_id_lt and a_order_id_ge: If a_order_id_ge is not null, only orders with order ID
greater than or equal to this value are eligible for purging. If a_order_id_lt is not null then
only orders with order ID less than this value are eligible for purging. If either of these is
set to om_new_purge_pkg.v_ptn_scope_latest, the purge scope is restricted to the
latest partition(s) where new orders are created.
• a_order_source_mnemonic: If it is not null, only orders with this order source are
eligible for purging. Wildcards are not supported.
6-85
Chapter 6
PL/SQL API Reference
• a_order_type_mnemonic: If it is not null, only orders with this order type are
eligible for purging. Wildcards are not supported.
• a_namespace_mnemonic: If it is not null, only orders in this cartridge namespace
are eligible for purging. Wildcards are not supported.
• a_version_mnemonic: This is used in combination with
a_namespace_mnemonic. If it is not null, only orders in the specified cartridge
namespace and version are eligible for purging. Wildcards are not supported.
• a_cartridge_id: If it is not null, only orders in the specified cartridge are eligible for
purging.
• a_parallelism: Specifies the degree of parallelism (the number of database jobs
performing the purge). If it is null, it uses the parallelism configured by the
degree_of_parallelism parameter. If it is 1, the purge is run serially (with a single
database job). See "Parallel Execution" for more information.
• a_commit_count: Specifies how often each job should issue a commit command.
Unless you have performed extensive purge performance tests to determine the optimal
value for this parameter, it is recommended that you leave it null. If the value is
null, the job uses the commit count that is configured in the
purge_commit_count parameter.
Example: The following block purges orders with a time limit of 15 minutes and a
parallelism of 8. The purge criteria select all orders that were closed 30 or more days
ago.
declare
  v_status integer;
begin
  om_new_purge_pkg.purge_orders(
    a_status        => v_status,
    a_stop_date     => sysdate + 15/24/60,  -- 15 minutes
    a_delete_before => trunc(sysdate) - 30,
    a_order_states  => om_new_purge_pkg.v_closed_orders,
    a_parallelism   => 8);
end;
om_new_purge_pkg.select_orders
This procedure inserts into the staging table OM_ORDER_ID_FOR_PURGE the order IDs
that satisfy the given purge criteria. This is useful when you cannot identify all orders to be
purged in a single operation of purge_orders, and you do not want to run multiple purges. In
this case:
• You can populate OM_ORDER_ID_FOR_PURGE piecemeal by running select_orders
several times with different purge criteria. You can also insert or delete order IDs from this
table manually.
• After you finish populating this table, run purge_selected_orders (see the sketch after the
following example).
Parameters:
• a_selected_count: Returns the number of order IDs inserted into
OM_ORDER_ID_FOR_PURGE by this call. This count ignores order IDs that were
already inserted into this table, even if they match the given purge criteria.
• The rest of the parameters specify the purge criteria; they are the same as in
purge_orders.
Example: The following selects for purge all orders in cartridge namespace X that were
closed at least 7 days ago and reside on the latest partition(s) (where new orders are
created).
declare
  v_selected_count integer;
begin
  om_new_purge_pkg.select_orders(
    a_selected_count     => v_selected_count,
    a_delete_before      => trunc(sysdate) - 7,
    a_order_states       => om_new_purge_pkg.v_closed_orders,
    a_order_id_ge        => om_new_purge_pkg.v_ptn_scope_latest,
    a_namespace_mnemonic => 'X');
end;
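After populating the staging table, you would run purge_selected_orders. A minimal sketch, assuming it takes the same a_status output and execution arguments as purge_orders:
declare
  v_status integer;
begin
  om_new_purge_pkg.purge_selected_orders(
    a_status      => v_status,
    a_parallelism => 8);
end;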
om_new_purge_pkg.stop_purge
This procedure stops the current order purge if one is running. The call returns when
the purge stops, which normally takes a few seconds.
Later you can resume the same purge by running resume_purge, restart the purge
with different parameters by running purge_selected_orders (for example, if you want
to change the time when the purge window ends or the degree of parallelism), or start
a new purge.
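A minimal sketch, assuming the procedure takes no arguments:
execute om_new_purge_pkg.stop_purge;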
om_new_purge_pkg.resume_purge
This procedure resumes a stopped order purge or an order purge that finished with errors.
If you do not supply any arguments, or if the given arguments are the same as those of the
initial purge operation, this procedure resumes processing of existing chunks that are either
unassigned or finished processing with errors, using the same degree of parallelism. If you do
supply new or changed arguments, this procedure allows you to change certain parameters
of the purge operation. For example:
• You can resume a stopped purge operation with a different a_stop_date value if you
want to change the end of the purge window.
• You can resume a stopped purge operation with a different a_parallelism value if you
want to lower the degree of parallelism of an online purge operation (for example, due to
an unexpected increase in order volume).
Parameters:
• a_stop_date: This parameter specifies the end of the purge window. If it is null, the initial
value supplied to the purge operation remains in effect.
• a_commit_count: This parameter specifies how often each job should commit. If it is
null, the initial value supplied to the purge operation remains in effect.
• a_parallelism: This parameter specifies the degree of parallelism (the number of
database jobs performing the purge). If it is null, the initial value supplied to the purge
operation remains in effect.
Note:
Do not use resume_purge to expand the scope of a purge. resume_purge does
not regenerate order ID chunks, so any order IDs that fall outside the range of
existing unassigned chunks will not be purged. If you want to expand the scope of a
purge, add order IDs to OM_ORDER_ID_FOR_PURGE and run
purge_selected_orders instead.
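For example, to resume a stopped purge with a new end for the purge window and a lower degree of parallelism (a sketch using only the parameters described above):
begin
  om_new_purge_pkg.resume_purge(
    a_stop_date   => sysdate + 30/24/60,  -- 30 minutes from now
    a_parallelism => 2);
end;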
Advanced Procedures
This section provides information about advanced procedures.
om_part_maintain.backup_selected_ords (Offline)
procedure backup_selected_ords(
a_parallelism binary_integer default null);
The purge_partitions procedure inspects each partition in the given range and inserts into
the OM_ORDER_ID_FOR_BACKUP table the order IDs of the orders that do not satisfy the
purge criteria. The specified orders are copied into the backup tables and they are restored
after the partitions are purged. The backup_selected_ords and restore_orders procedures
allow you to do the same for arbitrary order IDs, for example, if you want to retain orders for a
particular cartridge.
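For example, to retain the orders of a particular cartridge across a purge (a sketch; the cartridge ID value, and the assumption that om_order_header carries a cartridge_id column, are for illustration only):
-- select the order IDs to retain
insert into om_order_id_for_backup
  (select order_seq_id
     from om_order_header
    where cartridge_id = 123);  -- hypothetical cartridge ID
commit;
-- copy the selected orders into the backup tables before purging
exec om_part_maintain.backup_selected_ords(a_parallelism => 4);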
Note:
This procedure does not modify data in partitioned tables.
om_part_maintain.restore_orders (Offline)
This procedure restores orders from the backup tables into the partitioned tables, and
purges the backup tables.
procedure restore_orders(
a_parallelism binary_integer default null);
Normally you do not have to use this procedure because purge_partitions restores
orders automatically. However, it might be needed for recovery purposes, as discussed
in the "Troubleshooting and Error Handling" section. It can also be used in conjunction
with backup_selected_ords to exclude arbitrary order IDs from a purge.
Troubleshooting Functions
This section provides information about troubleshooting functions.
The returned information includes the table name, partition name, number of subpartitions,
tablespace name, and partition upper bound. If the table name is not OM_ORDER_HEADER,
the specified partition is missing. This function is useful for troubleshooting.
om_part_maintain.is_equipartitioned
This function returns false if the number of range partitions differs from table to table or the
schema is not partitioned. The implementation does not compare the number of subpartitions.
If number of range partitions differs from table to table, this could be the result of interrupted
or failed attempts to add or drop partitions. If the schema is not equi-partitioned, EXCHANGE
PARTITION cannot be used for purging partitions; therefore
om_part_maintain.purge_partitions returns right away. In this case, use
om_part_maintain.equipartition to partition your schema.
Recovery Procedures
This section provides information about recovery procedures.
om_part_maintain.equipartition (Offline)
This procedure adds the missing partitions needed to equi-partition the schema. Partitions
are added through ALTER TABLE ADD PARTITION and ALTER TABLE SPLIT PARTITION
operations. The procedure throws an exception if the schema is not partitioned.
Parameters:
• a_missing_ptns: The missing partitions to be added. If it is null, the procedure calls
is_equipartitioned to find all missing partitions.
Exceptions:
• ORA-20166: There is another in-progress maintenance operation.
• ORA-20170: Failed to suspend database jobs.
• ORA-20171: OSM is running.
Error handling: After you resolve the issue, re-run this procedure.
om_part_maintain.purge_orphan_order_data
This procedure is not part of regular maintenance operations. It purges orphan order data
from tables that are not range-partitioned (specifically, order data with an order ID that is less
than the minimum order ID in om_order_header). Orphan data could be the result of a failed
operation of purge_partitions or drop_partitions.
om_part_maintain.rebuild_unusable_indexes
procedure rebuild_unusable_indexes(
a_indexes dbms_sql.varchar2s,
a_parallelism binary_integer default null,
a_online boolean default true,
a_preferred_method varchar2 default null);
These procedures rebuild unusable indexes, and unusable index partitions and sub-
partitions. They are called automatically by other procedures that may leave indexes in
an unusable state, especially global indexes, such as drop_partitions,
purge_partitions, and equipartition.
Parameters:
• a_table_name_like: Restricts the scope of the operation to indexes of the
specified table name(s). You may use wildcards. The default is OM_%, which
means that, for example, exchange tables are ignored.
• a_indexes: The names of the indexes to be rebuilt.
• a_parallelism: Specifies the degree of parallelism. Indexes are altered back to
NOPARALLEL after they are rebuilt. It is recommended that you leave it null.
The implementation will choose the optimal method for each unusable index
depending on the index type and configuration parameters. For more information
see purge_policy_rebuild_unusable_indexes.
• a_online: Specifies whether indexes should be rebuilt online in order to avoid
failures due to contention.
• a_preferred_method: The preferred rebuild method. Valid values are:
– om_part_maintain.c_rebuild_idx_rebuild (REBUILD)
– om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL).
For more information see purge_policy_rebuild_unusable_indexes.
• a_parallelism: Specifies the degree of parallelism. It is recommended that you leave it
null. The implementation will choose the optimal method depending on the index type and
configuration parameters. For more information see
purge_policy_rebuild_unusable_indexes.
• a_online: Specifies whether the index should be rebuilt online in order to avoid failures
due to contention.
• a_preferred_method: The preferred rebuild method. Valid values are:
– om_part_maintain.c_rebuild_idx_rebuild (REBUILD)
– om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL).
For more information see purge_policy_rebuild_unusable_indexes.
om_part_maintain.sys$undo_restore_table (Offline)
procedure sys$undo_restore_table(
a_table_name varchar2,
a_parallelism binary_integer default null);
Note:
This is an internal procedure that should be used strictly for recovery purposes.
om_part_maintain.sys$undo_restore_orders (Offline)
procedure sys$undo_restore_orders(
a_parallelism binary_integer default null);
Database Reference
The following sections provide information about database views and database tables.
Database Views
The following sections provide information about database audit views.
OM_AUDIT_PURGE_ALL
The OM_AUDIT_PURGE_ALL view returns information about all order purges in
descending order (the latest purge is returned first).
Table 6-10 lists and describes the columns in the OM_AUDIT_PURGE_ALL view.
OM_AUDIT_PURGE_LATEST
The OM_AUDIT_PURGE_LATEST view is identical to the OM_AUDIT_PURGE_ALL view
except that it returns information only about the latest purge (see
"OM_AUDIT_PURGE_ALL"). This view is useful for monitoring.
Database Tables
The following sections provide information about audit related database tables.
OM_AUDIT_PURGE
The OM_AUDIT_PURGE table describes each order purge. Each audited purge
operation adds a record to this table as soon as the purge starts, so that the purge
operation can be monitored.
Table 6-11 describes the OM_AUDIT_PURGE table. This table is partitioned by
START_DATE. Each partition corresponds to a different month.
OM_AUDIT_PURGE_ORDER
The OM_AUDIT_PURGE_ORDER table stores a synopsis for each purged order including
the order ID and all attributes that are used to determine whether an order satisfies the purge
criteria. Orders are added to this table as they are purged and become visible as transactions
commit, which allows you to monitor the purge rate.
Table 6-12 describes the OM_AUDIT_PURGE_ORDER table. This table is reference-
partitioned with OM_AUDIT_PURGE as the parent table.
OM_AUDIT_PURGE_PARAM
The OM_AUDIT_PURGE_PARAM table stores the purge arguments and criteria
supplied to the purge procedure and a snapshot of relevant session and configuration
parameters at the time the purge was started. The following parameters are included:
• Arguments of the purge procedure that specify purge criteria, such as
a_delete_before, a_order_states, a_order_id_lt, a_order_id_ge,
a_order_source_mnemonic, a_order_type_mnemonic,
a_namespace_mnemonic, a_version_mnemonic, and a_cartridge_id.
• Arguments of the purge procedure other than purge criteria, such as a_stop_date,
a_parallelism and a_commit_count.
• Database session parameters that identify who ran the purge and where, such as
BG_JOB_ID, FG_JOB_ID, HOST, INSTANCE_NAME, OS_USER,
SERVICE_NAME, SESSION_USER, and SID.
• Purge-related configuration parameters in the om_parameter table, such as
degree_of_parallelism, parallel_execute_chunk_size, oms_timezone, and
purge_job_class.
Table 6-13 describes the OM_AUDIT_PURGE_PARAM table. This table is reference-
partitioned with OM_AUDIT_PURGE as the parent table.
Note:
In the event of failure during a purge operation, Oracle strongly recommends that
you stop OSM and perform all troubleshooting and recovery operations offline.
The PL/SQL API provides functions and procedures to troubleshoot and recover from errors.
Most procedures for managing partitions use om_sql_log_pkg, which is an internal package
that enables procedures to persist and run SQL statements so that processing can be
resumed in the event of an error. This is particularly useful for DDL statements.
The om_sql_log_pkg package persists SQL statements in the om_sql_log table, which
includes the following columns:
• sid: The session ID. The default value is the current session ID, that is,
sys_context('USERENV', 'SID'). This allows concurrent processing.
• name: This is usually the name of the procedure that generated the SQL statement. It is
useful to Oracle Support.
• line: This is a line number used for ordering the SQL statements to be run.
• sql_text: The SQL statement.
SQL statements persisted in om_sql_log are run by om_sql_log_pkg.exec. This procedure
runs all SQL statements with the specified session ID, ordered by line number. If you do not
specify the session ID, it uses the current one. When run, the line number of the current
statement is updated in the om_sql_pointer table. This allows you to monitor the procedure.
When successful, it deletes all statements with that session ID. In the event of failure,
however, it inserts in the om_sql_pointer table the error message with the session ID and
line number of the failed statement. In this case, when om_sql_log_pkg.exec is re-run, it
resumes with the failed statement.
Therefore, you can troubleshoot and recover from a failed partition maintenance
operation even if it was run by a scheduled job. The contents of om_sql_log and
om_sql_pointer allow for faster assistance from Oracle Support. After you fix the root
cause of a failure, in some cases you can resume the operation from the point of
failure. This ensures that your data is not left in an inconsistent state (although in some
cases you might have to take additional actions if you want to complete that
operation).
If you resume a failed operation from a different database session, or you abandon
that operation, you must manually delete the rows for the failed session by running the
following statement:
execute om_sql_log_pkg.remove(sid);
Example: You can review the set of SQL statements of a partition maintenance
operation that failed in the current session as follows:
select * from om_sql_log
where sid = sys_context('USERENV', 'SID')
order by line;
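Similarly, you can find the session ID and line number of the most recent failure recorded in om_sql_pointer (the error_date column is described in "Troubleshooting"):
select * from om_sql_pointer
order by error_date desc;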
• The recommended action is to re-run the procedure with the same argument. If this is not
possible (for example, because you cannot afford further downtime), do the following:
– You must at least run rebuild_unusable_indexes to ensure indexes are usable.
– (Optional) You can run purge_orphan_order_data to delete orphan order data.
Otherwise, orphan data is deleted by the next execution of purge_partitions or
drop_partitions.
– Run om_job_pkg.resume_jobs to resume database jobs.
• When in doubt, run is_equipartitioned to check whether the schema is equi-partitioned.
If it is not, you can run equipartition to fix it.
Troubleshooting
In the event of an unexpected error, purge_partitions re-enables any disabled constraints
and throws the exception again. However, failures could result in partitioning inconsistencies,
orphan order data, and even data loss if you do not follow recovery procedures.
If you spooled the output of the stored procedure to a file (recommended), review the file to
determine the reason and point of failure. If the purge capacity is greater than 1, the file also
indicates which purge tables were involved. You can identify the point of failure by reviewing
the started and finished messages that mark the beginning and end of the procedure.
A failure may occur in these procedures:
• sys$bpdr_backup: Copies the orders that do not satisfy the purge criteria into the
backup tables.
• sys$bpdr_purge: Purges one or more partitions entirely by exchanging them with purge
tables.
• sys$bpdr_drop: Drops N-1 empty partitions, where N is the number of purged partitions.
• sys$bpdr_restore: Restores the retained orders from the backup tables into the Nth
partition.
• rebuild_unusable_indexes: Rebuilds all or specific unusable indexes as required. It is
run:
– By sys$bpdr_backup prior to copying orders into the backup tables.
– By sys$bpdr_restore prior to restoring retained orders.
– At the end, prior to sys$purge_orphan_order_data.
• sys$purge_orphan_order_data: Purges orphan order data (run at the end).
• sys$purge_xchg_prg_tables: If the a_purge_xchg_prg_tables argument is true, it is
run at the end to purge the purge tables. It may also be run prior to purging a group of
successive partitions, if the purge capacity is exhausted.
If the procedure output is not available, inspect the following for clues.
• Most of the time you can determine the error and the point of failure by reviewing
the om_sql_log and om_sql_pointer tables.
– om_sql_pointer points to the SID (session ID) and line of the failed statement
in om_sql_log. If there are several errors in om_sql_pointer, check the
error_date column to find the SID of the most recent error.
– If om_sql_log includes EXCHANGE PARTITION statements, the procedure
failed in sys$bpdr_purge. Partitions are in an inconsistent state.
– If om_sql_log includes INSERT statements into the backup tables, the
procedure failed in sys$bpdr_backup. Partitions are in a consistent state.
– If om_sql_log includes calls to sys$restore_table, the procedure failed in
sys$bpdr_restore. Partitions contain partially restored orders.
– If om_sql_log includes statements to rebuild indexes, the procedure failed in
rebuild_unusable_indexes.
• Review the backup tables:
– If the backup tables are empty then there are a number of possibilities, such
as a) no orders were retained, b) the failure occurred prior to
sys$bpdr_backup, or c) sys$bpdr_restore purged the backup tables after a
successful restore.
– If the backup tables are not empty then the failure occurred either during or
after sys$bpdr_backup and possibly during sys$bpdr_restore.
– If none of the order IDs in XCHG_OM_BCK_$001$ exist in
OM_ORDER_HEADER then most likely the failure occurred during or after
sys$bpdr_purge. Check the remaining partitioned tables listed in the
OM_XCHG_TABLE table. If you cannot find those order IDs in any of those
tables then sys$bpdr_purge completed successfully (the data was
exchanged into the purge tables). There is also a remote possibility that the
failure occurred in sys$bpdr_restore while restoring retained orders into
OM_ORDER_HEADER (the first table to be restored). In this case,
user_parallel_execute_tasks should include a task with task_name equal to
restore:om_order_header.
– If some but not all of the order IDs in XCHG_OM_BCK_$001$ exist in
OM_ORDER_HEADER then the failure occurred in sys$bpdr_restore while
restoring retained orders into OM_ORDER_HEADER (the first table to be
restored). In this case, user_parallel_execute_tasks should include a task
with task_name equal to restore:om_order_header.
– If all of the order IDs in XCHG_OM_BCK_$001$ exist in
OM_ORDER_HEADER, check whether all the data in the remaining backup
tables exist in the corresponding partitioned tables (the OM_XCHG_TABLE
table specifies the exchange table ID for each partitioned table). If this is not
the case then the failure occurred during sys$bpdr_purge or
sys$bpdr_restore.
• Review the purge tables, especially those that correspond to
OM_ORDER_HEADER (for example, XCHG_OM_PRG_001$001$). If they do not
contain any data, most likely the purge failed before the sys$bpdr_purge
procedure. If the purge capacity is greater than 1, check the
sys$xchg_purge_seq parameter in the om_parameter table to find out which set
of purge tables was used for the latest purge.
• Review the affected partitions. If a partition in the purge range is empty, most likely it was
exchanged with the purge tables (it is also possible that it was previously empty). Check
the purge tables to confirm.
• Review the user_parallel_execute_tasks view in the OSM core schema. If the view
contains any tasks with task_name equal to restore:tableName, the procedure failed in
sys$bpdr_restore while restoring data into the tableName table (assuming the previous
procedure of purge_partitions was successful).
Error Handling
When you determine the point of failure, as discussed in "Troubleshooting," and you resolve
the issue, you can recover and finish the purge operation as follows:
• If the failure occurred during sys$bpdr_backup, the partitions are in a consistent state.
Run om_part_maintain.purge_xchg_bck_tables and om_sql_log_pkg.remove(SID)
to purge the backup tables, om_sql_log and om_sql_pointer.
• If the failure occurred during sys$bpdr_purge, the partitions are in an inconsistent state
(partially purged):
1. Run om_sql_log_pkg.exec(SID) to finish the purge (exchange), where SID is the
session ID of the failed operation (the SID is recorded in om_sql_pointer together
with the error message).
2. If you were consolidating partitions N-to-1, drop the N-1 partitions before the Nth
partition, which was exchanged. To drop those partitions, use the following statements
instead of drop_partitions:
ALTER TABLE OM_ORDER_HEADER DROP PARTITION partition_name;
ALTER TABLE OM_SEQUENCE DROP PARTITION partition_name;
3. If the backup tables are not empty, run om_part_maintain.restore_orders with the
desired degree of parallelism to rebuild unusable indexes and restore the retained
orders.
• If the failure occurred during sys$bpdr_drop while consolidating partitions: When you
consolidate partitions N-to-1, purge_partitions copies retained orders into the backup
tables, purges (exchanges) the Nth partition, drops N-1 partitions, and restores the
retained orders into the Nth partition.
– If the om_sql_log table contains the DROP PARTITION statements, run
om_sql_log_pkg.exec(SID), where SID is the session ID of the failed operation (the
SID is recorded in om_sql_pointer together with the error message). In some
releases, the DROP PARTITION statements are not logged in the om_sql_log table.
In this case, you can find them in the DBMS output. If you do not have the DBMS
output, run these statements:
ALTER TABLE OM_ORDER_HEADER DROP PARTITION partition_name;
ALTER TABLE OM_SEQUENCE DROP PARTITION partition_name;
– If the backup tables are not empty, run om_part_maintain.restore_orders with the
desired degree of parallelism to rebuild unusable indexes and restore the retained
orders.
• If the failure occurred during sys$bpdr_restore and you fixed the root cause, Oracle
recommends that you finish the restore operation. The partitions are in an inconsistent
state (retained orders are not fully restored). The backup tables are not affected and they
contain all retained orders. To resume the restore operation from the point of failure:
Performance Tuning
This section explains how to tune the following:
• degree_of_parallelism
• degree_of_parallelism_rebuild_indexes
• degree_of_parallelism_rebuild_xchg_indexes
• Parallel job execution
• Row-based purge
Tuning degree_of_parallelism
This parameter specifies the default DOP for queries, DML, and most DDL operations. In
particular, it affects the performance of order backup and restore statements performed by
purge_partitions. You can use the following procedure to evaluate the optimal
degree_of_parallelism without performing a purge.
To evaluate the optimal degree_of_parallelism:
1. Clear the om_order_id_for_backup table.
delete from om_order_id_for_backup;
2. Select a representative number of order IDs that does not exceed the value of
xchg_retained_orders_thres from a single partition. For example, if you frequently
retain 10,000 orders when you purge partitions:
insert into om_order_id_for_backup (
select order_seq_id
from om_order_header partition (P_000000000003000001)
where rownum <= 10000);
commit;
3. Back up the selected order IDs with the desired parallelism (for example, 16) and record
the elapsed time:
exec om_part_maintain.backup_selected_ords(16);
4. Repeat with a different degree of parallelism and compare the elapsed times until you
find the optimal DOP.
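When you finish the trials, you can discard the backed-up test data by purging the backup tables (see "om_part_maintain.purge_xchg_bck_tables"; the sketch below assumes the a_drop_storage argument defaults to false):
exec om_part_maintain.purge_xchg_bck_tables;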
Tuning degree_of_parallelism_rebuild_indexes
The best way to determine the optimal DOP for degree_of_parallelism_rebuild_indexes is
through trials. Try purging or dropping partitions with different settings for this parameter, and
review the DBMS output to compare the elapsed times for rebuilding indexes.
Tuning degree_of_parallelism_rebuild_xchg_indexes
The optimal DOP for degree_of_parallelism_rebuild_xchg_indexes is normally 1
(the default) because these indexes are very small and they are rebuilt one partition at
a time. There is rarely a reason to increase this value.
Tuning parallel_execute_chunk_size
The implementation of purge_partitions uses the dbms_parallel_execute package
to restore retained orders; this package uses database jobs to run the restore in
parallel. Order data is restored one table at a time, and each table is divided into
chunks. Each job is assigned a chunk, commits the transaction, gets the next chunk,
and so on. The process repeats for the next table. For example, if the degree of
parallelism is 32 and 64 chunks are created, 32 chunks will be processed concurrently
by jobs and they will be committed at about the same time before the remaining 32
chunks are processed.
The number of chunks created depends primarily on the volume of data, the number of
sub-partitions and the specified chunk size. The default value of
parallel_execute_chunk_size is 2000 blocks. If the size of the retained order data is
small to moderate, this chunk size normally results in as many chunks as sub-
partitions (for example, 32 or 64), which works well.
Beginning with 7.2.0.10.2, 7.2.2.5, and 7.2.4.2, each table to be restored is divided
separately into chunks. This means that the number of chunks is different for each
table. However, the volume of data for each chunk is about the same, regardless of
the table. This results in shorter transactions (more frequent commits) that require less
UNDO. Therefore, the default parallel_execute_chunk_size (2000 blocks) results in
good performance, regardless of the volume of data retained, and there is rarely a
need to change it.
Prior to 7.2.0.10.2, 7.2.2.5, and 7.2.4.2, the number of restore chunks is the same for
all tables because chunks are generated from the XCHG_OM_BCK_$001$ table.
However, the volume of data for each chunk varies from table to table. If the volume of
retained order data is very large (for example, tens of thousands of orders), the chunks
for large tables such as OM_ORDER_INSTANCE are large transactions that generate
a lot of UNDO and therefore require a large UNDO tablespace.
In this case, it might be better to increase the number of chunks in order to increase
the frequency of commits and reduce the UNDO size. For example, if your
performance tests show that committing every 500 orders is more efficient in terms of
elapsed time and/or UNDO size, and you normally retain about 100000 orders, the optimal
number of chunks would be 200. To increase the number of chunks you must decrease the
parallel_execute_chunk_size.
If you are not sure how chunks are generated at your patch level, review the restore
statements. If they are joins, chunks are generated as in 7.2.0.10 or earlier.
Prior to 7.2.0.10.2, 7.2.2.5 and 7.2.4.2, use the following procedure to find out how different
parallel_execute_chunk_size settings affect the number of chunks created. If you are using
7.2.0.10.2, 7.2.2.5, 7.2.4.2 or later, there is rarely a need to tune
parallel_execute_chunk_size. However, if you want to do so, you can substitute
om_order_header and xchg_om_bck_$001$ in the following procedure with any other
partitioned table and the corresponding xchg_om_bck_table to find out the number of
chunks that will be created for that partitioned table.
To find out how different parallel_execute_chunk_size settings affect the number of chunks
created:
1. Ensure the exchange tables are created.
2. Populate the backup tables with a large number of orders to retain, preferably all in the
same partition (substitute x and y, so that the range (x, y) contains the desired number of
orders):
insert into xchg_om_bck_$001$
(select * from om_order_header where order_seq_id between x and y);
3. Repeat the following procedure with different values for chunk_size (20, 100, 200,
and so on), until the query returns a count close to the desired number of chunks:
exec dbms_parallel_execute.create_task('CHECK_CHUNK_TASK');
exec dbms_parallel_execute.drop_chunks('CHECK_CHUNK_TASK');
exec dbms_parallel_execute.create_chunks_by_rowid('CHECK_CHUNK_TASK', user, -
  'XCHG_OM_BCK_$001$', by_row => false, chunk_size => 20);
select count(*) from user_parallel_execute_chunks
where task_name = 'CHECK_CHUNK_TASK';
exec dbms_parallel_execute.drop_task('CHECK_CHUNK_TASK');
Database Terms
This chapter uses the following database terms:
• Automatic Workload Repository (AWR): A built-in repository in every Oracle database.
Oracle Database periodically makes a snapshot of its vital statistics and workload
information and stores them in AWR.
7 Managing Optimizer Statistics
This chapter contains best practices for gathering optimizer statistics for the Oracle
Communications Order and Service Management (OSM) product. Using the best practices in
this chapter will result in better and more stable execution plans for SQL objects in the OSM
database.
Gathering Optimizer Statistics
For a list of steps and procedures that can be used to bootstrap and maintain the OSM
Database Optimizer Statistics Management process with OSM releases that support
locking of partition statistics, see knowledge article 1925539.1, New OSM Database
Optimizer Statistics Management, on the Oracle support website:
https://support.oracle.com
You can determine if the automatic optimizer statistics collection maintenance task is
enabled by running the following commands as a SYSDBA user:
set serveroutput on
SELECT client_name, status FROM dba_autotask_operation;
You can disable the automatic optimizer statistics collection maintenance task by
running the following commands as a SYSDBA user:
BEGIN
DBMS_AUTO_TASK_ADMIN.DISABLE(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;
/
You can enable the automatic optimizer statistics collection maintenance task by
running the following commands as a SYSDBA user:
BEGIN
DBMS_AUTO_TASK_ADMIN.ENABLE(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;
/
You can gather fixed object statistics when there is a representative load on the system
(ideally at peak utilization) by running the following commands as a SYSDBA user:
execute DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
• OM_ORDER_STATE_PENDING
• OM_ORDER_STATE_EVENT_PENDING
• OM_COORD_NODE_INSTANCE
You should not enable incremental statistics on highly volatile tables. Instead, Oracle
recommends that you lock statistics for these tables, using the following command:
execute om_db_stats_pkg.lock_volatile_order_stats;
However, if most of your orders complete in more than 1 hour, run the following instead:
execute om_db_stats_pkg.unlock_volatile_order_stats;
execute om_db_stats_pkg.set_table_volatility('OM_ORDER_FLOW',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.set_table_volatility('OM_AUTOMATION_CTX',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.set_table_volatility('OM_AUTOMATION_CORRELATION',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.set_table_volatility('OM_ORDER_POS_INPUT',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.set_table_volatility('OM_UNDO_BRANCH_ROOT',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.set_table_volatility('OM_ORCH_DEPENDENCY_PENDING',
om_const_pkg.v_volatility_low);
execute om_db_stats_pkg.lock_volatile_order_stats;
You should then confirm that INCREMENTAL_STALENESS is configured, using the following
command:
SELECT dbms_stats.get_prefs(pname=>'INCREMENTAL_STALENESS',
tabname=>'OM_ORDER_INSTANCE') FROM dual;
If this generates an error, review the list of database patches installed on your system.
Otherwise, you can confirm that INCREMENTAL_STALENESS is now configured correctly by
re-running the confirmation command you ran earlier, that is:
SELECT dbms_stats.get_prefs(pname=>'INCREMENTAL_STALENESS',
tabname=>'OM_ORDER_INSTANCE') FROM dual;
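If you need to set the preference yourself, this is normally done with
dbms_stats.set_table_prefs. The following is a sketch only; the exact
INCREMENTAL_STALENESS value required for your OSM release may differ:

exec dbms_stats.set_table_prefs(user, 'OM_ORDER_INSTANCE', 'INCREMENTAL_STALENESS', 'USE_STALE_PERCENT, USE_LOCKED_STATS') ;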
Preparing a New Partition
If you have changed the partition size and want to copy statistics from an older partition of a
different size, you can use a scale factor to scale the statistics. The following is an example
of scaling up statistics into the destination partition:
declare
v_copied boolean;
begin
om_db_stats_pkg.copy_order_ptn_stats(v_copied,
a_src_partition_name => 'P_000000000000400001',
a_dst_partition_name => 'P_000000000000700001',
scale_factor => '2');
end;
Note:
Because the scale_factor parameter is a varchar2 argument, it must be
provided in single quotes. The value can be any positive decimal number.
To copy recent and valid statistics to the most recently created order partition, as well
as to the corresponding partitions of all reference-partitioned tables, you can use:
declare
v_copied boolean;
begin
om_db_stats_pkg.copy_order_ptn_stats(v_copied);
end;
You should lock statistics on an empty partition after copying statistics into that
partition and you should leave statistics locked on that partition when it becomes
active.
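For example, assuming the destination partition from the earlier example, you can lock its
statistics using om_db_stats_pkg.lock_order_ptn_stats (described in "Optimizer Statistics
Management PL/SQL API Reference"):

execute om_db_stats_pkg.lock_order_ptn_stats('P_000000000000700001');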
Configuring a Partition When It Is No Longer the Active Partition
You should unlock statistics on a partition when it matures (that is, once it is no longer active).
This should be done following a switch to a new active partition.
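For example, to unlock statistics on a matured partition (the partition name shown is
illustrative):

execute om_db_stats_pkg.unlock_order_ptn_stats('P_000000000000400001');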
As the OSM order management user, you can remove stuck export jobs using the following:
declare
dpj number;
begin
dpj := dbms_datapump.attach('EXPORT_ORDER_PTN_STATS', user);
dbms_datapump.stop_job(dpj, 1, 0);
end;
You can remove stuck import jobs using the following commands:
declare
dpj number;
begin
dpj := dbms_datapump.attach('IMPORT_ORDER_PTN_STATS', user);
dbms_datapump.stop_job(dpj, 1, 0);
end;
Optimizer Statistics Management PL/SQL API Reference
Degree of Parallelism attribute of the table in the data dictionary. Because the degree
of parallelism is 1 for all OSM tables and indexes, Oracle recommends that you set the
DEGREE as a schema preference.
To do this, run the following command as the OSM order management user:
execute DBMS_STATS.SET_SCHEMA_PREFS(user, 'DEGREE', 'DBMS_STATS.AUTO_DEGREE');
However, if you gather statistics manually while the database is processing a workload
(for example, when gathering statistics for high volatility tables), you should
temporarily set a low value for DEGREE.
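For example, the following sketch temporarily lowers DEGREE for a manual gather and then
restores the schema preference; the value 2 is only a suggested starting point:

execute DBMS_STATS.SET_SCHEMA_PREFS(user, 'DEGREE', '2');
execute om_db_stats_pkg.gather_volatile_order_stats;
execute DBMS_STATS.SET_SCHEMA_PREFS(user, 'DEGREE', 'DBMS_STATS.AUTO_DEGREE');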
Note that the actual degree of parallelism can range from 1 (serial execution) for small
objects to DEFAULT_DEGREE for large objects.
Caution:
Do not change the degree of parallelism attribute of any OSM table or index.
This is not supported.
Cursor Invalidations
When statistics are modified by DBMS_STATS, new cursors that are not yet cached in
the shared pool use updated statistics to get execution plans. However, existing
cursors that are cached in the shared pool cannot update their execution plans.
Instead, such cursors are invalidated and new versions, called child cursors, are created.
This results in execution plans based on the updated statistics. This involves a hard-
parse operation that is more expensive than a soft-parse, which simply reuses a
cached cursor. For this reason, Oracle Database spreads cursor invalidations over a
time period long enough for hard-parses not to cause noticeable spikes in resource
usage. This time period is 5 hours by default and it is controlled by the
_optimizer_invalidation_period initialization parameter (in seconds).
If your database has performance issues that are caused by bad execution plans
because of stale or missing statistics, 5 hours is a long time to wait. Oracle therefore
recommends that you decrease the value of _optimizer_invalidation_period. For
example, the following command sets _optimizer_invalidation_period to 600
seconds.
alter system set "_optimizer_invalidation_period"=600 scope=both;
If 10 minutes turns out to be too short to avoid significant spikes caused by parsing in
your environment, increase the value accordingly.
om_db_stats_pkg.lock_volatile_order_stats
This procedure locks statistics on volatile order tables.
procedure lock_volatile_order_stats;
om_db_stats_pkg.unlock_volatile_order_stats
This procedure unlocks statistics on volatile order tables.
procedure unlock_volatile_order_stats;
om_db_stats_pkg.set_table_prefs_incremental
This procedure sets the INCREMENTAL statistics preference for partitioned OSM tables that
have the specified volatility level.
procedure set_table_prefs_incremental(
a_incremental boolean,
a_volatility number);
Parameters:
• a_incremental: Specifies whether you want statistics to be gathered incrementally on
partitioned OSM tables that have the specified volatility level. When set to true, the
PUBLISH preference is also set to true because this is required for incremental statistics
collection.
• a_volatility: Specifies the volatility level of partitioned OSM tables for which the
INCREMENTAL statistics preference should be set.
Exception:
• ORA-20165: Illegal argument: Invalid volatility level.
om_db_stats_pkg.set_table_volatility
This procedure sets the volatility level for an OSM table. The volatility level for OSM tables is
configured in OM_$INSTALL$TABLE.
procedure set_table_volatility(
a_table_name varchar2,
a_volatility number);
Parameters:
• a_table_name: Specifies the name of the table on which to set the volatility level.
• a_volatility: Specifies the volatility level to set. Valid values are
om_const_pkg.v_volatility_none, om_const_pkg.v_volatility_low,
om_const_pkg.v_volatility_medium, and om_const_pkg.v_volatility_high.
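For example (the table and volatility level shown are illustrative):

execute om_db_stats_pkg.set_table_volatility('OM_ORDER_FLOW', om_const_pkg.v_volatility_high);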
Maintenance Procedures
This section describes maintenance procedures.
om_db_stats_pkg.gather_cartridge_stats
This procedure gathers statistics for cartridge metadata tables.
procedure gather_cartridge_stats;
om_db_stats_pkg.gather_order_stats
This procedure gathers statistics for order tables configured with the specified volatility
level.
procedure gather_order_stats(
a_volatility number default null,
a_force boolean default false);
Parameters:
• a_volatility: The level of volatility of order tables for which statistics should be
gathered. Null by default, which means all volatility levels.
• a_force: Specifies whether you want statistics to be gathered on order tables even
if their statistics are locked. The default is false.
Exception:
• ORA-20165: Illegal argument: Invalid volatility level.
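For example, to gather statistics on high-volatility order tables even if their statistics are
locked:

execute om_db_stats_pkg.gather_order_stats(a_volatility => om_const_pkg.v_volatility_high, a_force => true);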
om_db_stats_pkg.gather_volatile_order_stats
This procedure gathers statistics for volatile order tables.
procedure gather_volatile_order_stats(
a_force boolean default false);
Parameters:
• a_force: Specifies whether you want statistics to be gathered on volatile order
tables even if their statistics are locked. The default is false. An order table is
deemed volatile if its volatility level is set to om_const_pkg.v_volatility_high.
om_db_stats_pkg.copy_order_ptn_stats
This procedure copies order partition statistics from the specified source order partition
to the specified destination order partition.
procedure copy_order_ptn_stats(
a_copied out boolean,
a_dst_partition_name varchar2 default null,
a_src_partition_name varchar2 default null);
Parameters:
• a_copied: Output parameter indicating whether statistics were successfully
copied.
• a_dst_partition_name: Specifies the name of the order partition to which you
want to copy statistics. If you do not specify this parameter, the most recently
added partition is used.
• a_src_partition_name: Specifies the name of the order partition from which you want to
copy statistics. If not specified, a partition with the most recent valid statistics is used, if
available. If no valid partition statistics are available, a_copied is set to false.
Exceptions:
• ORA-20142: Operation is not allowed: OSM schema is not partitioned.
• ORA-20165: Illegal argument: Partition does not exist.
• ORA-20165: Illegal argument: The source partition cannot be the same as the destination
partition
• ORA-20144: Function returned unexpected value. Internal error. Contact support: Cannot
find the newest partition.
om_db_stats_pkg.lock_order_ptn_stats
This procedure locks order partition statistics for the specified partition. Statistics of the
corresponding partitions of reference partition tables are also locked.
procedure lock_order_ptn_stats(
a_partition_name varchar2);
Parameters:
• a_partition_name: Specifies the name of the order partition to lock.
Exceptions:
• ORA-20165: Illegal argument: Partition does not exist.
om_db_stats_pkg.unlock_order_ptn_stats
This procedure unlocks order partition statistics for the specified partition. Statistics of the
corresponding partitions of reference partition tables are also unlocked.
procedure unlock_order_ptn_stats(
a_partition_name varchar2);
Parameters:
• a_partition_name: Specifies the name of the order partition to unlock.
Exceptions:
• ORA-20165: Illegal argument: Partition does not exist.
Advanced Procedures
This section describes advanced procedures.
om_db_stats_pkg.export_order_ptn_stats
This procedure exports order partition statistics from the specified order partition to the
specified statistics table. If that table already exists, it is dropped before exporting statistics to
the statistics table.
procedure export_order_ptn_stats(
a_exported out boolean,
a_src_partition_name varchar2 default null,
a_stat_table_name varchar2 default c_om_order_stat_table);
Parameters:
• a_exported: Output parameter indicating whether statistics were successfully
exported.
• a_src_partition_name: Specifies the name of the order partition from which you
want to export statistics. If not specified, a partition with the most recent valid
statistics is used, if available. If no valid partition statistics are available,
a_exported is set to false.
• a_stat_table_name: Specifies the name of the statistics table to which to export
statistics. Defaults to c_om_order_stat_table ('OM_ORDER_STAT_TABLE'). If
this statistics table already exists, it is dropped and recreated before exporting
statistics from the specified partition; if it is not a statistics table, the table is not
dropped and an exception is raised.
Exceptions:
• ORA-20142: Operation is not allowed: OSM schema is not partitioned.
• ORA-20165: Illegal argument: Partition does not exist.
• ORA-20165: Illegal argument: Invalid table name.
• ORA-20165: Illegal argument: Table is not a statistics table.
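A minimal usage sketch, assuming the source partition name exists in your schema:

declare
v_exported boolean;
begin
om_db_stats_pkg.export_order_ptn_stats(v_exported, a_src_partition_name => 'P_000000000000400001');
end;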
om_db_stats_pkg.import_order_ptn_stats
This procedure imports order partition statistics from the specified statistics table to the
specified destination order partition.
procedure import_order_ptn_stats(
a_imported out boolean,
a_dst_partition_name varchar2 default null,
a_stat_table_name varchar2 default c_om_order_stat_table);
Parameters:
• a_imported: Output parameter indicating whether statistics were successfully
imported.
• a_dst_partition_name: Specifies the name of the order partition to which you
want to import statistics. If you do not specify this parameter, the most recently
added partition is used.
• a_stat_table_name: Specifies the name of the statistics table from which to
import statistics. The default is c_om_order_stat_table
('OM_ORDER_STAT_TABLE').
Exceptions:
• ORA-20142: Operation is not allowed: OSM schema is not partitioned.
• ORA-20165: Illegal argument: Partition does not exist.
• ORA-20165: Illegal argument: Invalid table name.
• ORA-20165: Illegal argument: Table is not a statistics table.
• ORA-20144: Function returned unexpected value. Internal error. Contact support:
Cannot find the newest partition.
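A minimal usage sketch that imports into the most recently added partition from the default
statistics table:

declare
v_imported boolean;
begin
om_db_stats_pkg.import_order_ptn_stats(v_imported);
end;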
om_db_stats_pkg.expdp_order_ptn_stats
This procedure saves order partition statistics from the specified statistics table to the
DATA_PUMP_DIR directory. A .dmp suffix is added to the table name to form the name of the
file to which statistics will be saved; for example, OM_ORDER_STAT_TABLE.dmp. If that file
already exists, it is deleted before saving statistics to the file system.
procedure expdp_order_ptn_stats(
a_saved out boolean,
a_stat_table_name varchar2 default c_om_order_stat_table);
Parameters:
• a_saved: Output parameter indicating whether statistics were successfully saved.
• a_stat_table_name: Specifies the name of the statistics table from which to obtain
statistics. The default is c_om_order_stat_table ('OM_ORDER_STAT_TABLE').
Exceptions:
• ORA-20165: Illegal argument: Invalid table name.
• ORA-20165: Illegal argument: Table does not exist.
• ORA-20165: Illegal argument: Table is not a statistics table.
• ORA-20142: Operation is not allowed: Failed to save partition statistics to file system.
om_db_stats_pkg.impdp_order_ptn_stats
This procedure loads order partition statistics into the specified statistics table from the
DATA_PUMP_DIR directory. A .dmp suffix is added to the table name to form the name of the
file from which statistics will be loaded; for example, OM_ORDER_STAT_TABLE.dmp.
procedure impdp_order_ptn_stats(
a_loaded out boolean,
a_stat_table_name varchar2 default c_om_order_stat_table);
Note:
If partitioned statistics came from another system, they can be imported only if user
names are the same in both the source and destination systems.
Parameters:
• a_loaded: Output parameter indicating whether statistics were successfully loaded.
• a_stat_table_name: Specifies the name of the statistics table into which you want to
load statistics. The default is c_om_order_stat_table
('OM_ORDER_STAT_TABLE'). If this statistics table already exists, it is dropped and
recreated before loading statistics from the file system. If it is not a statistics table, the
table is not dropped and an exception is raised.
Exceptions:
• ORA-20165: Illegal argument: Invalid table name.
• ORA-20165: Illegal argument: File not found in DATA_PUMP_DIR directory.
Troubleshooting Procedures
This section describes the troubleshooting procedures.
om_db_stats_pkg.lstj_copy_order_ptn_stats
This procedure lists active copy_order_ptn_stats jobs. The output includes a job ID
that can be used to remove the job using remj_copy_order_ptn_stats.
procedure lstj_copy_order_ptn_stats;
om_db_stats_pkg.get_order_ptn_stats
This procedure lists statistics for table partitions that match the given filter criteria.
procedure get_order_ptn_stats;
om_db_stats_pkg.list_order_ptn_stats
This procedure outputs statistics for table partitions that match the given filter criteria
to dbms_output.
procedure list_order_ptn_stats;
om_db_stats_pkg.check_order_ptn_stats
This procedure validates the statistics on the schema to check for any errors. There
are two versions of this procedure: one outputs the results to dbms_output and the other
returns them as a collection of strings (for external use).
procedure check_order_ptn_stats;
This procedure looks for the following conditions and creates the appropriate level
message (in brackets) if the condition is violated:
• Missing statistics on Order Data Tables (CRITICAL)
• Empty or active partitions with unlocked statistics (CRITICAL)
• Mature partitions with locked statistics (WARNING)
• Statistics are locked with 0 rows on tables that should never have 0 rows when a
partition is used to store orders (e.g., OM_ORDER_HEADER) (CRITICAL)
• Statistics are locked with a small number of rows on tables that should have a
large number of rows in order to be representative of a partition used to store a
large number of orders (e.g., OM_ORDER_HEADER) (CRITICAL)
• Incremental statistics do not work with locked partition statistics (CRITICAL)
• Incremental statistics are not enabled on low or medium volatility tables (MAJOR)
• Incremental statistics are enabled on high volatility tables (MAJOR)
• Statistics are not locked on high-volatility tables (CRITICAL)
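For example, to run the dbms_output version and review the findings:

set serveroutput on
execute om_db_stats_pkg.check_order_ptn_stats;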
Recovery Procedures
This section describes the recovery procedures.
om_db_stats_pkg.remj_copy_order_ptn_stats
This procedure removes the specified copy_order_ptn_stats job.
procedure remj_copy_order_ptn_stats (
a_job_id number);
Parameter:
• a_job_id: Specifies the ID of the job to remove.
Exception:
• ORA-20155: Job does not exist.
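For example, list the active jobs to find the job ID and then remove the stuck job (the job ID
42 is illustrative):

execute om_db_stats_pkg.lstj_copy_order_ptn_stats;
execute om_db_stats_pkg.remj_copy_order_ptn_stats(a_job_id => 42);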
8 Backing Up and Restoring OSM Files and Data
This chapter helps you understand how Oracle Communications Order and Service
Management (OSM) is related to the Oracle Database backup and restore procedures.
Oracle Database Backup Considerations
RMAN Considerations
Recovery Manager (RMAN) is an Oracle Database utility that backs up, restores, and
recovers Oracle databases. It backs up individual datafiles, and provides complete and
incremental backup options. Following are some issues you should consider for using
RMAN:
• Because it backs up datafiles, this method is most appropriate for use when OSM
is not sharing any tablespaces with other applications. If OSM is sharing its
tablespaces with other applications, they will be backed up at the same time. This
means that if the OSM data is restored, the information for any other applications
will be restored as well. This may not be desired.
• You should back up all of the permanent tablespaces that you have defined for
OSM. For example, if you have different tablespaces for data and indexes, you
should remember to back up both of them.
• RMAN may be slower than Flashback. This might be an issue in a large
production environment.
takes to prepare for a purge or upgrade and reduces the time it takes to recover from
purge or upgrade failures.
• Taking a database snapshot for testing an upgrade procedure, troubleshooting a
problem, and so on, becomes much less time consuming.
9 Monitoring and Managing OSM
This chapter describes how to monitor and manage the Oracle Communications Order and
Service Management (OSM) system using the Oracle WebLogic Server Administration
Console.
This chapter provides guidelines and best practices for monitoring an OSM deployment. This
includes functional monitoring of particular orders or processes, and performance monitoring
to assist in tuning. In order to effectively monitor OSM, you require a broad knowledge of
many components, such as the OSM Managed Server, the Java Virtual Machine (JVM), the
WebLogic Server, and the Oracle Database.
Refreshing OSM Metadata
When started, the WebLogic Server Administration Console prompts for a password.
This should be the password for a user that is a member of the Administrators group
in WebLogic. One such user is the WebLogic administration user that was created
when the domain was created. By default, the name of this user is weblogic.
After you have successfully logged in, the WebLogic console Home window is
displayed.
Monitoring and Analyzing Performance
Note:
For more information about tuning OSM production systems, see OSM Installation
Guide.
script. For more information, see "Tools for Performance Testing, Tuning, and
Troubleshooting".
Managing Logs
For details about managing logs in OSM cloud native, see "Exploring Alternate
Configuration Options" in OSM Cloud Native Deployment Guide.
Using JMS Queues to Send Messages
Note:
In an OSM clustered environment, you must use JMS queues as a JMS destination
to receive JMS events. Do not use JMS topics in an OSM clustered environment.
When OSM sends data to an external system, such as UIM or ASAP, it does so by sending
JMS messages to the appropriate JMS request queue of an external system.
If the external system is not processing the requests from OSM, the queues get backlogged.
It is important to be able to monitor the size of the JMS queues in order to know whether or
not they are backing up.
To monitor the JMS queues:
1. Log in to the WebLogic Administration Console.
Click Services/Messaging/JMS Servers/oms_jms_server.
The General Configuration page is displayed.
2. Click the Monitoring tab and then click Active Destinations.
A list of active destinations targeted to the server is displayed.
Note:
The default view of the table does not contain the Consumers Current column.
Oracle recommends that you customize the table using the Customize link to
include this column, along with any other customizations you may want to
make.
The Consumers Current column defines the current number of listeners on the destination.
If a destination does not have any listeners, the external system does not receive the
messages.
The Messages Current column defines the current number of unprocessed messages in the
JMS destination. A large number of messages (for example, 10,000) in this destination is a
problem. It means that the system is not keeping up, that the messages are not getting
processed, or that the messages are getting processed but errors are occurring and the
messages are getting put back on the destination.
OSM has the following JMS destinations present:
• oms_behavior_queue: Used for customizing task assignment
• oms_events: Internal destination used for events such as automation, notifications, and
task state changes
• oms_order_events: Used for order state changes such as OrderCreateEvent,
OrderStateChangeEvent, AmendmentStartedEvent, OrderCancelledEvent
• oms_order_updates: Internal destination used for processing amendments
• oms_signal_topic: Internal destination used to trigger a metadata refresh
Note:
The important columns are Consumers, Messages, and Messages
Received.
If the number in the messages column for these queues continues to grow, the
external system may not be processing the messages sent by OSM. You must check
to see if the external system is working properly.
If the number of consumers for the queues is 0, the external system may not have
configured its listeners properly. Check to see if the external system is configured
properly.
Using Work Managers to Prioritize Work
For details on configuring these values, see "Working with Shapes" in OSM Cloud
Native Deployment Guide.
10 Exporting and Importing OSM Schema Data
This chapter provides information about how to selectively export and import schema data,
which include orders and model data (cartridge data), from an Oracle Communications Order
and Service Management (OSM) database.
Note:
The utilities do not provide an effective means of backing up and restoring database
data. For more information about how to do this, see "Backing Up and Restoring
OSM Files and Data ".
You can follow several scenarios to export and import OSM data, depending on the reason
and the type of data that you need.
• Exporting and Importing the OSM Model Data Only
• Exporting and Importing the OSM Model and a Single Order
• Exporting and Importing a Range of Orders and the OSM Model
• Exporting and Importing a Range of OSM Orders Only
Exporting and Importing the OSM Model Data Only
Note:
Model tables that are system-managed, or that contain system-managed
parameters, are excluded from the export because these tables are created
by the database installer and already exist in the target schema.
COUNT(*)
----------
1
If the count that is returned is 1, the package exists and you do not need to create
it. If the count is zero, you must create the package.
Note:
If the export/import utility package does not exist in the database, you
can run the installer to create it. For information about running the
installer, see OSM Installation Guide.
When you run the OSM installer, make sure that you select the
"Database Schema" component to create the export/import utility
package.
5. Prevent extra white space in the generated PAR file by running the following
command:
set trimspool on
Extra white space in the PAR file causes the export to fail.
6. Specify the file where you want to capture the generated output by running the
following command. The database creates this file in the folder from which you started
the SQL*Plus session:
spool osm_expdp_model_parfile.txt
10. Using a text editor, remove the following lines from the osm_expdp_model_parfile.txt
file:
• exec om_exp_imp_util_pkg.generate_expdp_model_parfile
• PL/SQL procedure successfully completed.
• spool off
11. (Optional) Modify PAR file parameters as needed in osm_expdp_model_parfile.txt file.
For more information, see "Changing PAR File Parameters".
12. Export the model tables by running the following command:
expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_model_parfile.txt
13. Print the schema data to a text file by doing the following:
c. Specify the file where you want to capture the generated output by running the
following command:
spool osm_expdp_schema_info.txt
Note:
Keep this text file with the OSM database dump (dmp) files because the file
contains information that will be used when you import the dmp files.
The Data Pump Export utility unloads data and metadata into a set of
operating system files called a dump file set, which is then imported using
the Data Pump Import utility. For more information about Oracle Data Pump
utility and the dmp files, see "Oracle Data Pump Export" in Oracle
Database Utilities.
14. Restart the OSM server you exported the data from.
Adding Partitions
If the target schema is partitioned and you are importing order data, you must ensure
that the order IDs that you want to import map to existing partitions. If any order ID is
greater than or equal to the greatest partition upper bound, add one or more partitions
as needed.
If the source schema uses partitioning realms (for example, for short-lived orders), the
size and order of the partitions and the partitioning realms that they are associated
with on the source system, must be duplicated on the target system. Partitions
associated with partitioning realms must be added after OSM model data is imported
because the partitioning realms are imported with OSM model data. See "Partitioning
Realms" for more information.
If, for example, the source schema has the partitions and partitioning realms that are shown
in Table 10-1, partitions of the same size, associated with the same realms, must be created
on the target schema. For this example, you run the following commands after you create the
partitioning realms in the target schema using OSM model import:
exec om_part_maintain.add_partition(a_tablespace, 'default_order');
exec om_part_maintain.add_partition(a_tablespace, 'short_lived_orders');
where a_tablespace is the tablespace for new partitions. This procedure modifies the
default tablespace attribute of partitioned tables with the specified tablespace before
adding partitions. If you do not specify the tablespace, or if the a_tablespace value is
null, each partition is created on the default tablespace of the partitioned table (for
example, on the same tablespace as the most recently added partition).
Table 10-1 shows an example of a set of partitions that use the partitioning realms
feature.
For information about using partitioning realms, see "Partitioning Realms". For more
information about adding partitions, see "Managing the OSM Database Schema".
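The parameters described below belong to the om_part_maintain.add_partitions procedure;
the following call is a sketch that assumes the parameter order matches the descriptions:

exec om_part_maintain.add_partitions(n, tablespace, realm_mnemonic);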
where:
• n is the number of partitions that you want to add.
• tablespace is the tablespace for the new partitions.
• realm_mnemonic is the mnemonic of the partitioning realm associated with this partition.
This value is used only for schemas that use partitioning realms. If this value is not
provided, the default realm is default_order.
3. Query the user_tab_partitions table to view the newly added partitions.
select * from user_tab_partitions where table_name = 'OM_ORDER_HEADER' ;
Note:
If you are not on the latest version of your OSM release, the add_partitions
procedure might not work. In this case, see "Adding Partitions (Online or Offline)"
for information about how to add partitions.
Note:
Purging the data before importing it to the target schema ensures there are no
constraint violations when importing duplicate data.
3. Open another terminal and log in to SQL*Plus as the OSM core schema user.
4. Disable constraints and triggers, and stop jobs by running the following command:
exec om_exp_imp_util_pkg.pre_impdp_setup
This command ensures that there are no errors when importing OSM data.
5. From another terminal, import the model tables. For example:
Exporting and Importing the OSM Model and a Single Order
This enables the constraints and triggers, and resubmits the jobs that were
disabled and stopped before importing the OSM data.
7. Restart the OSM server.
COUNT(*)
----------
1
If the count that is returned is 1, the package exists and you do not need to create
it.
Note:
If the export/import utility package does not exist in the database, you
can run the installer to create it. For information about running the
installer, see OSM Installation Guide.
When you run the OSM installer, make sure that you select the
"Database Schema" component to create the export/import utility
package.
3. Verify that the order that you want to export is not open by running the following SQL
commands:
set serveroutput on
exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id => orderid_min,
a_max_order_id => orderid_max);
where both orderid_min and orderid_max are the ID of the order that you want to export.
For example: a_min_order_id => 123456, a_max_order_id => 123456. For more
information, see "About Order Export Queries".
The following message is displayed:
There are no open orders
If the order specified is open and the server should be taken offline, the following
message is displayed:
The open order count is: 1
Note:
Oracle recommends that you always check for open orders before you export
order data.
4. If the order that you want to export is open, take the OSM server offline.
5. Allow the output to be generated by running the following command:
set serveroutput on
6. Prevent extra white space in the generated PAR file by running the following command:
set trimspool on
Extra white space in the PAR file causes the export to fail.
5. Modify the PAR file option query in osm_expdp_orders_parfile.txt to select the single
order that you want to export.
Note:
Model tables that are system-managed, or contain system-managed
parameters, are excluded from the export because these tables are created
by the database installer and already exist in the target schema.
Note:
The orders must match the model. Purging the data before importing it to
the target schema ensures there are no constraint violations when
importing duplicate data.
3. (Optional) Purge the existing OSM model by running the following command:
exec om_exp_imp_util_pkg.purge_model
Exporting and Importing a Range of Orders and the OSM Model
Note:
Purging the data before importing it to the target schema ensures there are no
constraint violations when importing duplicate data.
Running this command ensures that there are no errors when importing OSM data.
6. Import the model data by running the following command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace
For more information about these parameters, see "About Import Parameters".
7. Import order tables that define an order sequence ID by running the following command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
For more information about these parameters, see "About Import Parameters".
8. Finalize the OSM target schema by running the following command:
exec om_exp_imp_util_pkg.post_impdp_setup
This enables the constraints and triggers, and resubmits the jobs that were disabled and
stopped before importing the OSM data.
9. Restart the OSM server.
COUNT(*)
----------
1
If the count that is returned is 1, the package exists and you do not need to create
it.
Note:
If the export/import utility package does not exist in the database, you
can run the installer to create it. For information about running the
installer, see OSM Installation Guide.
When you run the OSM installer, make sure that you select the
Database Schema component to create the export/import utility
package.
3. Verify that none of the orders that you want to export are open by running the
following SQL commands:
set serveroutput on
exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id =>
orderid_min);
where orderid_min is the minimum bound value of a range of order IDs. For more
information, see "About Order Export Queries".
The following message is displayed:
There are no open orders
If any of the orders within the range specified are open and the server should be
taken offline, the following message is displayed:
The open order count is: n
Note:
Oracle recommends that you always check for open orders before you
export data.
4. If any of the orders that you want to export are open, take the OSM server offline.
5. Allow the output to be generated by running the following command:
set serveroutput on
6. Prevent extra white space in the generated PAR file by running the following
command:
set trimspool on
Extra white space in the PAR file causes the export to fail.
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp
FILESIZE=1GB
LOGFILE=osm_expdp_orders.log
CONTENT=DATA_ONLY
PARALLEL=1
QUERY=" where order_seq_id >= 100000"
TABLES=(
OM_ATTACHMENT,
OM_HIST$DATA_CHANGE_NOTIF,
OM_HIST$FALLOUT,
OM_HIST$FALLOUT_NODE_INSTANCE,
OM_HIST$FLOW,
OM_HIST$NOTIFICATION,
OM_HIST$ORDER_HEADER,
OM_HIST$ORDER_INSTANCE,
OM_HIST$ORDER_STATE
...
OM_JMS_THREAD,
OM_SYSTEM_EVENT
)
4. Run the following command, which stops capturing the generated output:
spool off
Note:
Model tables that are system-managed, or contain system-managed
parameters, are excluded from the export because these tables are created
by the database installer and already exist in the target schema.
Note:
The orders must match the model. Purging the data before importing it to
the target schema ensures there are no constraint violations when
importing duplicate data.
3. (Optional) Purge the existing OSM model by running the following command:
exec om_exp_imp_util_pkg.purge_model
Note:
Purging the data before importing it to the target schema ensures there
are no constraint violations when importing duplicate data.
exec om_exp_imp_util_pkg.pre_impdp_setup
Running this command ensures that there are no errors when importing OSM data.
6. Import the model data by running the following command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace
For more information about these parameters, see "About Import Parameters".
7. Import order tables that define an order sequence ID by running the following command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
For more information about these parameters, see "About Import Parameters".
8. Finalize the OSM target schema by running the following command:
exec om_exp_imp_util_pkg.post_impdp_setup
This enables the constraints and triggers, and resubmits the jobs that were disabled and
stopped before importing the OSM data.
9. Restart the OSM server.
Exporting and Importing a Range of OSM Orders Only
Note:
This section does not provide information about importing OSM model data to a
target schema. If you want to do that, see "Importing OSM Model Data."
COUNT(*)
----------
1
If the count that is returned is 1, the package exists and you do not need to create
it.
Note:
If the export/import utility package does not exist in the database, you
can run the installer to create it. For information about running the
installer, see OSM Installation Guide.
When you run the OSM installer, make sure that you select the
Database Schema component to create the export/import utility
package.
3. Verify that the orders that you want to export are not open by running the following
SQL commands:
set serveroutput on
exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id =>
orderid_min);
where orderid_min is the minimum bound value of a range of order IDs. For more
information, see "About Order Export Queries".
If the orders within the range specified are not open and the server does not need
to be taken offline, the following message is displayed:
There are no open orders
If any of the orders within the range specified are open and the server should be
taken offline, the following message is displayed:
The open order count is: n
Note:
Oracle recommends that you always check for open orders before you
export data.
4. If any of the orders that you want to export are open, take the OSM server offline.
5. Allow the output to be generated by running the following command:
set serveroutput on
6. Prevent extra white space in the generated PAR file by running the following command:
set trimspool on
Extra white space in the PAR file causes the export to fail.
Exporting a Range of Orders from Order Tables That Define an Order Sequence ID
To export a range from order tables that define an order sequence ID:
1. Follow all the steps of the procedure in "Preparing to Export Order Tables for a Range of
Orders".
2. Specify the file where you want to capture the generated output by running the following
command:
spool osm_expdp_orders_parfile.txt
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp
FILESIZE=1GB
LOGFILE=osm_expdp_orders.log
CONTENT=DATA_ONLY
PARALLEL=1
QUERY=" where order_seq_id between 100000 and 200000"
TABLES=(
OM_ATTACHMENT,
OM_HIST$DATA_CHANGE_NOTIF,
OM_HIST$FALLOUT,
OM_HIST$FALLOUT_NODE_INSTANCE,
OM_HIST$FLOW,
OM_HIST$NOTIFICATION,
OM_HIST$ORDER_HEADER,
OM_HIST$ORDER_INSTANCE,
OM_HIST$ORDER_STATE
...
OM_JMS_THREAD,
OM_SYSTEM_EVENT
)
Running this command ensures that there are no errors when importing OSM
data.
4. Import order tables that use a different name for the order ID column by running
the following command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
For more information about these parameters, see "About Import Parameters".
5. Import order tables that define an order sequence ID by running the following
command:
impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR
DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log
REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName
REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
For more information about these parameters, see "About Import Parameters".
6. Finalize the target OSM schema by running the following command:
exec om_exp_imp_util_pkg.post_impdp_setup
This enables the constraints and triggers, and resubmits the jobs that were
disabled and stopped before importing the OSM data.
7. Restart the OSM server.
Changing PAR File Parameters
Note:
There are other parameters in the PAR file, but if you change them, the export will
not be successful.
For more information about the parameters that are available in the command line mode of
the data pump export, see "Oracle Data Pump Export" in Oracle Database Utilities.
where:
• pdb_variable_name is the name of the PDB directory variable
• path is the path to the data pump directory (for example, /samplepath/pdbdatapumpdir)
• osm_core_schema is the core OSM schema (for example, ordermgmt)
DUMPFILE (osm_expdp_model%U.dmp): Specifies the name and, optionally, the directory
objects of dump files for an export job.
FILESIZE (1GB): Specifies the maximum size of each dump file.
INCLUDE: Enables you to filter the metadata that is exported by specifying objects and
object types for the current export mode.
Note: If you change this parameter in any of the export scenarios in this chapter, the export
will not be successful.
LOGFILE (osm_expdp_model.log): Specifies the name and, optionally, the directory for the
log file of the export job.
PARALLEL (1): Specifies the maximum number of processes of active execution operating
on behalf of the export job.
About Import Parameters
where:
• pdb_variable_name is the name of the PDB directory variable
• path is the path to the data pump directory (for example, /samplepath/pdbdatapumpdir)
• osm_core_schema is the core OSM schema (for example, ordermgmt)
DUMPFILE: Specifies the name and, optionally, the directory objects of dump files for an
import job.
LOGFILE: Specifies the name and, optionally, the directory for the log file of the import job.
REMAP_SCHEMA: Specifies the source schema from which objects are loaded and the
target schema into which they are loaded. When importing, if the source and target schema
are the same, the REMAP_SCHEMA option does not need to be specified.
REMAP_TABLESPACE: Remaps all objects selected for import with persistent data in the
source tablespace to be created in the target tablespace. When importing, if the source and
target tablespace are the same, the REMAP_TABLESPACE option does not need to be
specified.
CLUSTER: This parameter is not available in the generated PAR file. It determines whether
Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start
workers on other Oracle RAC instances. To force Data Pump Import to use only the instance
where the job is started, add CLUSTER=N in the PAR file. Otherwise, ignore this parameter.
In Oracle Database, the default value for import is Y.
Troubleshooting Export/Import
Errors might occur during the process of exporting or importing data.
Table 10-5 lists some common export errors and their solutions. For more information
about troubleshooting errors that might occur when exporting or importing data, see
the troubleshooting section of the knowledge article about exporting and importing
data [Doc ID 1594152.1], available from the Oracle support website:
https://support.oracle.com
For information about Oracle RAC and data pump import, see "Oracle RAC
Considerations" and "Oracle Data Pump Import" in Oracle Database Utilities.
For information about predefined roles in an Oracle Database installation, and about
guidelines for securing user and accounts privileges, see Oracle Database Security
Guide.
Error: UDE-00014: invalid value for parameter, 'include'
Cause: The include parameter used in the export options PAR file contains more than 4,000
characters. This is normally due to extra white space at the end of each line when the file is
spooled.
Solution: As a workaround, run the following command in SQL*Plus before generating the
options PAR files:
SQL> SET TRIMSPOOL ON
Error: ORA-39002: invalid operation; ORA-39070: Unable to open the log file; ORA-29283:
invalid file operation; ORA-06512: at "SYS.UTL_FILE", line 536; ORA-29283: invalid file
operation
Cause: The location specified for the export DIRECTORY option is not accessible.
Solution: Update the DIRECTORY option specified in the export command. For exports or
imports performed in an Oracle RAC environment using Automatic Storage Management,
the DIRECTORY option should be updated to point to the shared location. For more
information, see the Data Pump: Oracle RAC Considerations section of the Oracle Database
documentation.
Error: ORA-39001: invalid argument value; ORA-39000: bad dump file specification;
ORA-31641: unable to create dump file "+DATA/osm_expdp_model.dmp"; ORA-17502:
ksfdcre:4 Failed to create file +DATA/osm_expdp_model.dmp; ORA-15005: name
"osm_expdp_model.dmp" is already used by an existing alias
Cause: A previously generated version of the dmp file already exists.
Solution: Remove the previously generated version of the dmp file, or specify the following
option in the export PAR file to overwrite it:
REUSE_DUMPFILES=YES
Table 10-6 lists some common import errors and their solutions.
SQL> exec om_exp_imp_util_pkg.purge_schema
Error: ORA-39001: invalid argument value; ORA-39046: Metadata remap
REMAP_TABLESPACE has already been specified.
Cause: The same source tablespace has been specified more than once for the
REMAP_TABLESPACE option.
Solution: This might happen when the OSM Core and Rule Engine schemas use the same
tablespace. In this case, you need to specify the REMAP_TABLESPACE for this tablespace
only once.
Error: ORA-00932: inconsistent datatypes: expected OM_T_ORCH_PROCESS got
OM_T_ORCH_PROCESS
Cause: There is a known issue with data pump import that causes imports with
REMAP_SCHEMA and TYPE definitions to generate this error.
Solution: Follow the steps outlined in the scenario "Exporting and Importing a Range of
Orders and the OSM Model". When generating the order PAR files, select the option to
export all orders, that is, order_target_seq_id > 0.
Error: UDI-31626: operation generated ORACLE error 31626; ORA-31626: job does not
exist; ORA-39086: cannot retrieve job information; ORA-06512: at
"SYS.DBMS_DATAPUMP", line 3326; ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551;
ORA-06512: at line 1
Cause: This is an issue with data pump import privileges. For more information, see the
knowledge article about the issue [Doc ID 1459430.1], available from the Oracle support
website:
https://support.oracle.com
Solution: Apply Patch 13715680, or follow the workaround in the notes of the associated
bug to add the missing privileges. For more information, see bug 13715680 on the Oracle
support website:
https://support.oracle.com
The missing privileges are:
SQL> GRANT lock any table TO datapump_imp_full_database;
SQL> GRANT alter any index TO datapump_imp_full_database;
11 Configuring Time Zone Settings
This chapter describes how to configure time zone settings in Oracle Communications Order
and Service Management (OSM). This is an optional configuration task.
12 Troubleshooting OSM
This chapter provides guidelines to help you troubleshoot problems with your Oracle
Communications Order and Service Management (OSM) system.
• Is the system otherwise operating normally? Has response time or the level of
system resources changed? Are users complaining about additional or different
problems?
Note:
The procedures below set the value to 2 MB. This is a suggested value to
start with, but you should adjust the value if necessary, according to your
needs.
In your instance, project or shape specification file, add or append the following parameter
and adjust the value as necessary:
shape:
user_mem_args: "-Xss2m"
<Message-Driven EJB:
YourCartridgeName_1.0.0.0.0_YourPluginName_orderCompleteEventMDB's transaction
was rolled back. The transaction details are: ....
OSM does not support JMS topics within an OSM clustered environment. For more
information about OSM queue configuration, see the discussion of OSM integration
with external systems in OSM Installation Guide.
Quick Fix Button Active During Order Template Conflicts in Design Studio
Conflicts can occur when order templates are created in Design Studio. Presently, Quick Fix
does not work for order template conflicts, even if the Quick Fix button is active. All order
template conflicts must be resolved manually.
export JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.communications.ordermanagement.orchestration.generation.model.ConcurrencyLevel=2"
This error occurs because there are non-empty exchange tables that were created by
the failed purge operation that you performed the first time.
To resolve this issue, you must purge the exchange tables manually before you retry
purging.
Reporting Problems
If "General Checklist for Resolving Problems" does not help you to resolve the
problem, write down the pertinent information:
• A clear and concise description of the problem, including when it began to occur.
• Relevant portions of the log files.
• Recent changes in your system, even if you do not think they are relevant.
• List of all the OSM components and patches installed on your system.
• Have ready all specification files (project, instance and shape) used to create the OSM
instance.
When you are ready, report the problem to Oracle.
13 OSM Log Messages
This chapter details the Oracle Communications Order and Service Management (OSM) log
messages. The sections included in this chapter are:
• OSM Catalog Messages
• Automation Catalog Messages
14 Using the XML Import/Export Application
This chapter provides information about the XML Import/Export application (XMLIE), which is
used to manage data and metadata in the Oracle Communications Order and Service
Management (OSM) database schema.
Note:
Do not run the import, fastUndeploy, userAdmin, and credStoreAdmin
operations using the XMLIE scripts, but instead use the new mechanism.
You can find more details in "Differences Between OSM Cloud Native and OSM Traditional
Deployments".
There are two types of information in an OSM database schema:
• Metadata: Information that defines the order model. For example, the definitions of
processes, orders, and tasks.
• Data: Information that represents orders. For example, order nodes, attributes, and
values.
Using XMLIE, you can perform actions such as importing and exporting metadata, purging
metadata and data, and migrating data. You can also use XMLIE to validate the metadata
model and to create a graphical representation of the metadata.
Note:
Although actions such as importing and exporting metadata, purging both metadata
and data, and migrating data can be done using XMLIE, Oracle Communications
Service Catalog and Design - Design Studio is the preferred application for running
these functions.
XMLIE can work with a localized database, but the application must also be localized. See
OSM Developer's Guide for information on localizing OSM, including localizing XMLIE.
If you are running the OSM application on a UNIX or Oracle Linux platform, you must run
XMLIE by using a set of Ant scripts. If you are running the OSM application on a Windows
platform, you must run XMLIE by using a set of batch scripts.
About Using the XML Import/Export Application
Note:
The SDK/XMLImportExport/config/config_sample.xml file is a sample
XMLIE configuration file that can be used as a template for the
config.xml configuration file.
The config.xml file name is arbitrary. If you customize the name of the
config.xml file, ensure that you substitute the customized name
wherever you must specify the config.xml file (for example, when using
the import and export commands in the import.bat and export.bat
scripts).
This chapter uses the default config.xml file name in all examples.
About XML Import/Export Batch Scripts and Ant Commands
Note:
This chapter uses xmlModelFile as the documentation placeholder name for
this file.
Note:
This command is deprecated. Use the ant import command instead because it
automatically upgrades models during an import.
Note:
Do not use this command. Instead, use the mechanism described in
"Differences Between OSM Cloud Native and OSM Traditional Deployments".
Before you can run these Ant commands, you must configure the Ant environment
build.properties file and the config.xml file. For more information, see "Configuring the XML
Import/Export Environment Files" and "Configuring the config.xml File XML Import/Export
Nodes and Elements."
You can use the following scripts to encrypt passwords:
• EncryptPasswords.sh: This encrypts passwords in the config.xml file. For more
information, see "Using the EncryptPasswords Utility."
Note:
This script is deprecated. Use the import.bat script instead because it
automatically upgrades models during an import.
Note:
Do not use this script. Instead, use the mechanism described in
"Differences Between OSM Cloud Native and OSM Traditional
Deployments".
Configuring the XML Import/Export Environment Files
Note:
Passing an unencrypted password as a command line argument to the XML
Import/Export tool scripts is no longer possible. The -p db_password and -
clientpassword xmlAPI_password arguments, which were previously
deprecated, have now been removed. You must either use encrypted
passwords in the config.xml file (using the EncryptPassword utility) or
interactively provide the unencrypted password when prompted. For security
recommendations, see "Configuring the config.xml File XML Import/Export
Nodes and Elements."
Before you can run these scripts, you must configure the environment config.bat script and
the config.xml file. For more information, see "Configuring the XML Import/Export
Environment Files" and "Configuring the config.xml File XML Import/Export Nodes and
Elements."
where:
• MW_home is the location where Oracle Fusion Middleware was installed
• java_max_memory is the maximum heap to be used by JVM
• root.dir is the path of XMLIE directory
• model_document is the path for XML model document
• config_document is the path for config.xml
• namespace is the OSM cartridge namespace
• version is the cartridge version
Note:
You can also refer to the README.txt file found under SDK/XMLImportExport for
information on configuring the build.properties file for Ant scripts.
For example:
middlewareHome=C:/oracle/middleware
java.maxmemory=512m
xmlie.root.dir=./
xmlie.root.modelDocument=./data.xml
xmlie.root.configDocument=./config/config.xml
xmlie.root.namespace=test
xmlie.root.version=4.0
xmlie.root.htmlDir=./htmlModel
3. If you want to configure order purge parameters in the build.properties file, see
"Running Ant with the orderPurge.xml file On UNIX or Linux to Purge Orders".
4. If you want to configure cartridge migration parameters in the build.properties
file, see "Configuring and Running an Order Migration."
Note:
The config_sample.xml file contains references to absolute paths that
start with "C:\". Be sure to configure these paths to reflect your
environment.
3. Configure the database connection node in the config.xml file. This node is
required for most Ant commands and batch scripts.
<databaseConnection>
<user>osm_schema_username</user>
<password>osm_schema_password</password>
<dataSource>jdbc:oracle:thin:@ip_address:osm_db_port:osm_db_sid</dataSource>
</databaseConnection>
where:
• osm_schema_username and osm_schema_password are the OSM schema user
name and password.
• ip_address, osm_db_port, and osm_db_sid are the OSM database IP address, port
number, and SID.
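For example, a filled-in databaseConnection node might look like the following (the schema
user, password, host, port, and SID values are illustrative only):

<databaseConnection>
<user>ordermgmt</user>
<password>ordermgmt_password</password>
<dataSource>jdbc:oracle:thin:@192.0.2.10:1521:osmdb</dataSource>
</databaseConnection>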
Note:
To export or import the symbols, change the character set and database
connection to something like the following sample:
<encoding>UTF-8</encoding>
<dataSource> jdbc:oracle:oci:
(description=(address=(host=host1.example.com)(protocol=tcp)(port=1521))
(connect_data=(SID=ORASID)))
</dataSource>
4. Configure the XML API connection node to specify the connection information to be used
by operations that utilize the XML API. For example, migration Ant commands or batch
scripts require this node.
<xmlAPIConnection>
<user>weblogic_username</user>
<password>weblogic_password</password>
<url>http://ip_address:port</url>
</xmlAPIConnection>
where:
• weblogic_username and weblogic_password are the user name and password of the
user performing the migration. This user must have access to all source orders being
migrated and to the target order's order entry task for the order type and source (if
closeSource is set to true; see "About Migrating Orders" for details).
• ip_address and port are the WebLogic Administration server IP address and port
number.
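For example, with illustrative values:

<xmlAPIConnection>
<user>weblogic</user>
<password>weblogic_password</password>
<url>http://192.0.2.10:7001</url>
</xmlAPIConnection>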
5. Configure the WebLogic Administrator credentials and connection information node. This
node is required for Ant commands and batch scripts that modify OSM user credentials:
<j2eeAdminConnection>
<j2eeServiceName>weblogic</j2eeServiceName>
<user>weblogic_username</user>
<password>weblogic_password</password>
<protocol>protocol</protocol>
<hostname>ip_address</hostname>
<port>port</port>
</j2eeAdminConnection>
where:
• weblogic_username and weblogic_password are the user name and password of a
WebLogic user with Administrator privileges.
where:
• filename_path is the path to the log file that includes the file name of the log
file.
• boolean can be true or false. If set to true (default), XMLIE overwrites the log
file every time the application starts. If set to false, XMLIE creates a
cumulative log by saving the log from session to session and adding new
messages to it.
7. Do one of the following:
• For exporting metadata from an OSM database, configure the export node.
See "About Exporting Metadata."
• For importing metadata to the OSM database, configure the import node. See
"About Importing Metadata".
• For purging metadata and order data in an OSM database, configure the
purge node. See "About Purging MetaData and Data."
• For migrating orders between OSM cartridges, configure the migrate node.
See "About Migrating Orders."
• For validating metadata from an existing XML file, configure the validation
node. See "About Validating the Metadata Model and Data."
• For creating a graphical HTML representation of the metadata model, see "About
Creating a Graphical Representation of the Metadata Model."
8. (Optional) If XMLIE is to be run unattended, secure the EncryptPassword utility
and the configuration file that contains user credentials.
For enhanced security, each of the XMLIE operations that require user passwords
prompts you for those passwords during invocation. If XMLIE is to be run
unattended, you can alternatively encrypt those passwords and store them in the
XMLIE configuration file (typically config.xml).
If passwords are to be stored in the XMLIE configuration file, do the following:
a. Set the permissions of the configuration file to be readable only by select
administrative users. Refer to your OS documentation for instruction.
b. Run the EncryptPassword utility so that user name and password credentials
for all XMLIE users are encrypted for safe storage. For more information, see
"Using the EncryptPasswords Utility."
Note:
If you plan to run XMLIE in an unattended mode, you must first run the
EncryptPasswords utility; otherwise, you cannot perform many of the
application functions and OSM gives an error indicating that you must run the
EncryptPasswords utility.
Note:
The actions that XMLIE runs are supported in Design Studio. Any metadata
changes made using XMLIE will be overwritten when deploying the same cartridge
using Design Studio.
The following sections describe import and export commands and configurations.
Note:
A selective export exports all of the data, then it applies a filter according to
the selection. Consequently, selective exports take the same amount of time
as full exports.
In Example 14-2, the references to entities (for example, each state) are sorted
alphabetically by name:
Example 14-2 References to Entities (states or statuses) Sorted Alphabetically
by Name:
<task name="enter_payment_information" xsi:type="genericTaskType">
<description>Enter Payment Information</description>
<state>accepted</state>
<state>completed</state>
<state>received</state>
<status>back</status>
<status>next</status>
</task>
In Example 14-3, the Import/Export application does not sort by name because the
order matters (that is, there is a logical difference): country appears after last_name
because the designer specified country to appear after last_name:
Example 14-3 Not Sorted By Name
<masterOrderTemplate>
<dataNode element="account_information">
<dataNode element="first_name"/>
<dataNode element="last_name"/>
<dataNode element="country">
<viewRule xsi:type="eventRuleType">
<event>value-changed</event>
<action>refresh</action>
</viewRule>
</dataNode>
Note:
This section is applicable only if you are upgrading to OSM 7.0 from a
previous release. SQL rules and text rules were replaced in OSM 7.0. SQL
rules are supported in Design Studio. The SQL rule is imported as a
separate file that can be edited as a text document.
The SQL-based rule type in OSM can contain entity IDs (mostly node IDs), which must
be replaced with new IDs during data migration. IDs must be exposed as entity
attributes so that the application can find them in SQL-based rules and replace them
with new ones when importing into a fresh environment. IDs in a document otherwise
serve no purpose and can be ignored. You can disable this feature by setting the
exposeEntityID element in the config.xml file for XMLIE to false, thereby reducing the
model document size.
Note:
This is a function of the export behavior.
Text rules can reference any entity ID. Node IDs that follow known patterns (all node
functions and stored procedures in the om_ordinst_value_pkg package) are converted
automatically. Any other entity ID requires user assistance: you must tokenize the rule
text by placing a token before the ID so that it can be properly parsed and converted,
as in the following example:
/*$entityType*/
delay_flag :=
om_ordinst_value_pkg.get_node_value_like(:order_seq_id,76983,:coord_set_id);
if ( rtrim(delay_flag)='yes' ) or (val1 <= (sysdate - 2/24)) then
:rule_result := 'true';
else
:rule_result := 'false';
end if;
end;
delay_flag := om_ordinst_value_pkg.get_node_value_like(:order_seq_id,
new_id,:coord_set_id);
if ( rtrim(delay_flag)='yes' ) or (val1 <= (sysdate - 2/24)) then
:rule_result := 'true';
else
:rule_result := 'false';
end if;
end
where:
• validation: Options are:
– true: Validates the XML model before performing the export.
– false: Does not validate the XML model before performing the export.
(default)
Caution:
Oracle recommends you do not skip the model validation, so this
parameter should always be set to true.
where:
• namespace: The namespace for the cartridge.
• version: The cartridge version.
• entity: The entity you are targeting. For example, workgroup, region, and
schedule.
3. Do the following:
a. If you are using Ant, run the following command:
ant export
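For a selective export, an export node might look like the following sketch. The
validateModel attribute name and the selection syntax are assumptions based on the import
node shown later in this chapter; the namespace, version, and entity values are illustrative:

<export validateModel="true">
<selection>/oms:model/oms:cartridge[@namespace="test" and @version="4.0"]/oms:workgroup</selection>
</export>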
XMLIE provides the import command, which is used to import metadata into an OSM
database. If you import metadata, make sure that the elements you import do not conflict with
existing metadata that is part of a Design Studio OSM cartridge. Otherwise, you may
encounter version conflicts, overwrite existing elements, or create other discrepancies.
You can import the entire metadata database using the import node or you can use the import
selection element within an import node to specify the metadata to import based on:
• All entities or selected entities from a specific cartridge
• All cartridges in a namespace
Each OSM cartridge is uniquely identified by namespace and version, so selecting all of
the cartridges in a namespace includes all versions of the cartridges within the
namespace.
• System level parameters
By enabling selective imports, you can grant concurrent access to a single model or cartridge
for multiple developers. Using this method, developers can import just the entities on which
they are working at that moment, which gives other developers access to other entities within
the cartridge.
The import operation is performed in one transaction. Consult your Oracle Database
Administrator (DBA) for the appropriate setup for the rollback segment.
Note:
User workgroups are not part of the metadata model, so they must be re-entered
after an import.
After importing or exporting a cartridge, you must remap the e-mail notifications that were
associated with individual users. For best results, associate notifications only with
workgroups because user names differ between environments.
where:
• validation: Options are:
– true: Validates the XML model before performing the import.
– false: Does not validate the XML model before performing the import. (default)
Caution:
Oracle recommends you do not skip the model validation, so this
parameter should always be set to true.
where:
• namespace: The namespace for the cartridge.
• version: The cartridge version.
• entity: The entity you are targeting. For example, workgroup, region, and
schedule.
3. Do the following:
a. If you are using Ant, run the following command:
ant import
Note:
The workgroup definition is a system level entity that can be applied to multiple
different cartridges. If you add a new workgroup using XMLIE, make sure you do
not overwrite existing workgroup definitions.
5. When the export is complete, open the XML file containing the workgroup information
and add a new workgroupDefinition element and all child elements. For example:
<?xml version = '1.0' encoding = 'ISO-8859-2'?>
<model xmlns="http://www.metasolv.com/OMS/OrderModel/2002/06/25"
xmlns:osm="http://xmlns.oracle.com/communications/ordermanagement/model"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.metasolv.com/OMS/OrderModel/2002/06/25
file:///D:/OSMSTA~1/SDK/XMLIMP~1//models/OmsModel.xsd">
<workgroupDefinition name="newWorkgroup">
<description>newWorkgroup</description>
<permissions>
<createdVersionedOrders />
<exceptionProcessing />
<onlineReports />
<priorityModification />
<referenceNumberModification />
<searchView />
<taskAssignment />
<worklistViewer />
</permissions>
<calendar>
<weeklyWorkHours>no_schedule</weeklyWorkHours>
<region>no_region</region>
</calendar>
</workgroupDefinition>
</model>
Note:
In this scenario, the XML model file contains only those elements that
need to be imported. If the model file contained other elements that did
not need to be imported, you can add a selection element to the import
node in the SDK\XMLImportExport\config\config.xml file that targets
the workgroupDefinition entity in the model file.
For example:
<import validateModel="false"
nonEmptyDatabaseAction="ignore"entityConflictAction="replace">
<selection>/oms:model/oms:workgroupDefinition</selection>
</import>
Note:
This procedure assumes that you have already created the task that you
want to assign to a workgroup.
5. When the export is complete, open the XML file containing the workgroup information
and add a new task element. For example:
<?xml version = '1.0' encoding = 'ISO-8859-2'?>
<model xmlns="http://www.metasolv.com/OMS/OrderModel/2002/06/25"
xmlns:osm="http://xmlns.oracle.com/communications/ordermanagement/model"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.metasolv.com/OMS/OrderModel/2002/06/25
file:///D:/OSMSTA~1/SDK/XMLIMP~1//models/OmsModel.xsd">
<schemaVersion>7.2.0</schemaVersion>
<version>
<label>7.2.0.0.366</label>
<majorVersion>1.0</majorVersion>
</version>
<cartridge namespace="bb_ocm_demo" version="1.0.0.0.1">
<description>BB OCM Demo</description>
<default>true</default>
<timestamp>2012-05-28T13:05:12</timestamp>
<workgroup name="demo">
<column name="Phone #">
<path>/subscriber_info/primary_phone_number</path>
</column>
<column name="Name">
<path>/subscriber_info/name</path>
</column>
<permissions>
<orderEntry>
<orderType>add_adsl_siebel</orderType>
<orderSource>add_adsl_siebel</orderSource>
</orderEntry>
<task>activate_dslam</task>
<task executionModes="do">add_adsl_siebel_creation</task>
<task>add_capacity</task>
<task>assign_port</task>
<task>demo_query</task>
<task>send_customer_survey</task>
<task>ship_modem_self_install_pkg</task>
<task>verify_adsl_service_availability</task>
<task>verify_order</task>
<task>new_task</task>
</permissions>
</workgroup>
</cartridge>
</model>
Note:
In this scenario, the XML model file contains only those elements that
need to be imported. If the model file contained other elements that did
not need to be imported, you can add a selection element to the import
node in the SDK\XMLImportExport\config\config.xml file that targets
the workgroup entity in the model file.
For example:
<import validateModel="false"
nonEmptyDatabaseAction="ignore"entityConflictAction="replace">
<selection>/oms:model/oms:cartridge[@namespace="bb_ocm_demo" and
@version="1.0.0.0.1"]/oms:workgroup</selection>
</import>
Note:
You must shut down the WebLogic Server before running the purge
command or an exception is thrown.
The purge.bat script and ant purge command are not transactional so any
unexpected failure may leave the schema in an invalid state. If this occurs,
repeat the purge.bat script or ant purge command until it completes
successfully.
When you use the fast undeploy functionality (where the fast_cartridge_undeploy parameter
is set to True), the cartridge is undeployed from the OSM server, but cartridge metadata and
order data are not purged from the database. The fast undeploy functionality is useful in
development and test environments where you need to deploy and undeploy cartridges
frequently.
When you use the undeploy functionality (where the fast_cartridge_undeploy parameter is
set to False), the cartridge is undeployed and its metadata and order data are purged from
the database, which can be time-consuming depending on how complex the cartridge is and
whether it has a significant number of orders.
For information about changing the fast_cartridge_undeploy parameter in the oms-
config.xml file, see "Configuring OSM with oms-config.xml."
To purge the entire schema (metadata and orders) or undeploy a specific cartridge using Ant
commands:
1. Consult "Online vs. Offline Maintenance" to ensure that the system is in the appropriate
state (online or offline) for the operation you intend to perform.
2. Do one of the following:
Note:
You can target an entire schema for the undeploy, forceundeploy, and purge
commands, or you can specify a namespace or a namespace and version by
configuring the following build.properties parameters:
• xmlie.root.namespace
• xmlie.root.version
For more information about these parameters, see "Configuring the
build.properties File for Ant Commands."
• To undeploy a cartridge using fast undeploy, but only if no pending orders exist, run
the following command:
ant fastUndeploy
• To undeploy a cartridge, but only if no pending orders exist, run the following
command:
ant undeploy
• To undeploy a cartridge using fast undeploy, even if pending orders exist, run the
following command:
ant forceFastUndeploy
• To undeploy a cartridge, even if pending orders exist, run the following command:
ant forceUndeploy
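For example, to limit these commands to one cartridge version, you might set the following
in build.properties (the values shown are illustrative):

xmlie.root.namespace=bb_ocm_demo
xmlie.root.version=1.0.0.0.1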
3. If you used the fast undeploy option and OSM was online, refresh the metadata for OSM.
See "Refreshing OSM Metadata" for more information.
To purge the entire schema (metadata and orders) or undeploy a specific cartridge using
batch scripts:
1. Consult "Online vs. Offline Maintenance" to ensure that the system is in the
appropriate state (online or offline) for the operation you intend to perform.
2. Do one of the following:
• To completely purge a target schema, use the following batch script:
purge.bat config\config.xml force
If you run this script without the force attribute, the purge.bat script fails if any
pending orders exist on the cartridge you are purging. If you run this script with
the force attribute, the script purges the cartridge and all pending orders.
• To undeploy every version of a cartridge with a specified namespace, but only
if no pending orders exist, use the following batch script:
purge.bat config\config.xml -n namespace
where namespace is the OSM cartridge namespace.
• To undeploy every version of a cartridge, run the following batch script:
purge.bat config\config.xml force -n namespace
where namespace is the OSM cartridge namespace.
• To undeploy every version of a cartridge using fast undeploy, but only if no
pending orders exist, use the following batch script:
fastUndeploy.bat config\config.xml -n namespace
where namespace is the OSM cartridge namespace.
• To undeploy every version of a cartridge using fast undeploy, even if pending
orders exist, use the following batch script:
fastUndeploy.bat config\config.xml force -n namespace
where namespace is the OSM cartridge namespace.
3. If you used the fast undeploy option and OSM was online, refresh the metadata for OSM.
See "Refreshing OSM Metadata" for more information.
Note:
Oracle recommends dropping old partitions that contain completed orders as the
best way to purge orders (see "Dropping Partitions (Offline Only)" for more
information). If you cannot use this method because of pending orders, you can use
the XMLIE order purge script as a slower alternative.
Note:
All date parameters must be specified in the format:
yyyy-mm-ddThh24:mi:ss Z
For example: 2011-06-28T13:39:00 EST
Where:
• before_purge: Use this date parameter with the order_states parameter. For
example, to purge orders completed 30 days ago, specify
order_states="COMPLETED" and a purge_before date that is 30 days before the
current date (for example, purge_before=2011-06-28T13:39:00 EST). Options are:
– all: All orders that were created before this date are considered for the
purge.
– any closed state: All orders whose completion date is before this date are
considered for the purge.
– any open state: All orders that were created and transitioned to the state
specified before this date are considered for the purge.
If no purge_before date is specified, the date is set to 5 seconds before the
purge starts.
• order_states: An order state must be specified. Options are one or more of the
following comma separated values:
– all
– open
– not_running
– running
– not_started
– suspended
– cancelled
– compensating
– amending
– cancelling
– closed
– completed
– aborted
– in_progress
– waiting_for_revision
– waiting
– failed
For example: "not_started,completed"
• namespace: Must be specified. Valid values are either a namespace
mnemonic or ALL (applies to all cartridges). For example, to purge all orders
regardless of other conditions, specify order_states="ALL" and
namespace="ALL".
• version: If namespace is ALL, version is ignored. If namespace is specified but
no version is specified, the purge applies to all versions of the namespace.
• order_type: The order type mnemonic. If specified, only orders with this type
are purged.
• order_source: The order source mnemonic. If specified, only orders with this source
are purged.
• commitCount: The number of orders to purge before committing the transaction to
the database. If specified, this can improve the performance of the order purge by
breaking it up into smaller transactions.
• order_id_upper: The order ID used with the orderIdLessThan parameter, which
specifies the exclusive upper order ID bound for the purge.
• order_id_lower: The order ID used with the orderIdGreaterOrEq parameter, which
specifies the inclusive lower order ID bound for the purge.
• op_number: The number of purge operations to run in parallel in the database. The
value must be a power of two from the set 1, 2, 4, 8, 32, or 64.
For example:
orderPurge.bat config/config.xml doPurge "purge_before=2011-01-01T23:59:59 EST"
"order_states=COMPLETED,NOT_STARTED" "namespace=abc" "version=1.0"
"order_type=x" "order_source=y" "commitCount=10" "orderIdLessThan=10"
"orderIdGreaterOrEq=50" "parallelism=64"
Note:
The syntax for the scheduled order purge is identical to the immediate order
purge except for the start_date and stop_date attributes.
Where the parameters match the ones in step 1, with the addition of the following:
• stop_date: The time when the purge should stop, even if all orders satisfying the
purge criteria have been purged (for example, stop the purge before peak hours). If
no stop_date is specified, the purge stops when all orders satisfying the purge criteria
have been purged.
• start_date: For scheduled purges only - the time when the purge should start (must
be later than the current time). When the start_date is reached, the purge starts
automatically. If no start_date is specified, the purge is immediate.
For example:
orderPurge ./config/config.xml schedulePurge "purge_before=2011-01-01T23:59:59
EST" "order_states=COMPLETED,NOT_STARTED" "namespace=abc" "version=1.0"
"order_type=x" "order_source=y" commitCount=10 "start_date=2007-01-01T23:59:59 EST"
3. Use the following syntax to list all scheduled order purges that have not started:
orderPurge.bat xmlConfigFile listPurges
For example:
orderPurge.bat ./config/config.xml listPurges
4. Use the following syntax to remove an order purge that has not started:
orderPurge.bat xmlConfigFile removePurge "job_id"
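For example, using an illustrative job ID (use a job ID reported by listPurges):

orderPurge.bat ./config/config.xml removePurge "12345"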
Running Ant with the orderPurge.xml file On UNIX or Linux to Purge Orders
To purge orders from an OSM schema using Ant:
1. Open the SDK/XMLImportExport/build.properties file.
2. If you want to perform an immediate purge of some or all orders before a certain
date using the immediateOrderPurge attribute for the ant purge command, edit
the following arguments:
xmlie.orderPurge.purgeBefore=before_purge
xmlie.orderPurge.orderStates=order_states
xmlie.orderPurge.namespace=namespace
xmlie.orderPurge.version=version
xmlie.orderPurge.orderType=order_type
xmlie.orderPurge.orderSource=order_source
xmlie.orderPurge.commitCount=commitCount
xmlie.orderPurge.orderIdLessThan=order_id_upper
xmlie.orderPurge.orderIdGreaterOrEq=order_id_lower
xmlie.orderPurge.parallelism=op_number
Where:
• before_purge: Use this date parameter with the order_states parameter. For
example, to purge orders completed 30 days ago, specify
order_states="COMPLETED" and a purge_before date that is 30 days before the
current date (for example, purge_before=2011-06-28T13:39:00 EST). Options are:
– all: All orders that were created before this date are considered for the
purge.
– any closed state: All orders whose completion date is before this date are
considered for the purge.
– any open state: All orders that were created and transitioned to the state
specified before this date are considered for the purge.
If no purge_before date is specified, the date is set to 5 seconds before the
purge starts.
• order_states: An order state must be specified. Options are one or more of the
following comma separated values:
– all
– open
– not_running
– running
– not_started
– suspended
– cancelled
– compensating
– amending
– cancelling
– closed
– completed
– aborted
– in_progress
– waiting_for_revision
– waiting
– failed
For example: "not_started,completed"
• namespace: Must be specified. Valid values are either a namespace mnemonic or
ALL (applies to all cartridges). For example, to purge all orders regardless of other
conditions, specify order_states="ALL" and namespace="ALL".
• version: If namespace is ALL, version is ignored. If namespace is specified but no
version is specified, the purge applies to all versions of the namespace.
• order_type: The order type mnemonic. If specified, only orders with this type are
purged.
• order_source: The order source mnemonic. If specified, only orders with this source
are purged.
• commitCount: The number of orders to purge before committing the transaction to
the database. If specified, this can improve the performance of the order purge by
breaking it up into smaller transactions.
• order_id_upper: The order ID used with the orderIdLessThan parameter, which
specifies the exclusive upper order ID bound for the purge.
• order_id_lower: The order ID used with the orderIdGreaterOrEq parameter, which
specifies the inclusive lower order ID bound for the purge.
• op_number: The number of purge operations to run in parallel in the database. The
value must be a power of two from the set 1, 2, 4, 8, 32, or 64.
For example:
xmlie.orderPurge.purgeBefore=2006-06-30T23:59:59 EST
xmlie.orderPurge.orderStates=COMPLETED
xmlie.orderPurge.namespace=default
xmlie.orderPurge.version=1.0.0.0.0
xmlie.orderPurge.orderType=ot
xmlie.orderPurge.orderSource=os
xmlie.orderPurge.commitCount=10
xmlie.orderPurge.orderIdLessThan=10
xmlie.orderPurge.orderIdGreaterOrEq=50
xmlie.orderPurge.parallelism=64
3. If you want to schedule a purge of some or all orders on a certain date using the
scheduleOrderPurge attribute for the ant purge command, edit the following additional
arguments:
xmlie.orderPurge.startDate=start_date
xmlie.orderPurge.stopDate=stop_date
Where:
• stop_date: The time when the purge should stop, even if all orders satisfying
the purge criteria have been purged (for example, stop the purge before peak
hours). If no stop_date is specified, the purge stops when all orders satisfying
the purge criteria have been purged.
• start_date: For scheduled purges only - the time when the purge should start
(must be later than the current time). When the start_date is reached, the
purge starts automatically. If no start_date is specified, the purge is immediate.
For example:
xmlie.orderPurge.purgeBefore=2006-06-30T23:59:59 EST
xmlie.orderPurge.orderStates=COMPLETED
xmlie.orderPurge.namespace=default
xmlie.orderPurge.version=1.0.0.0.0
xmlie.orderPurge.orderType=ot
xmlie.orderPurge.orderSource=os
xmlie.orderPurge.commitCount=10
xmlie.orderPurge.orderIdLessThan=10
xmlie.orderPurge.orderIdGreaterOrEq=50
xmlie.orderPurge.parallelism=64
xmlie.orderPurge.startDate=2007-01-01T00:01:01 EST
xmlie.orderPurge.stopDate=2007-12-31T23:59:59 EST
4. If you want to remove a scheduled order purge that has not started using the
removeOrderPurge attribute for the ant purge command, edit the following
argument:
xmlie.orderPurge.jobId=job_id
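For example, with an illustrative job ID value:

xmlie.orderPurge.jobId=12345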
Note:
Orders that satisfy the purge criteria are purged and related details such
as the number of orders purged are logged in the XMLIE log file (as
configured in config.xml). If an error occurs, the purge stops. Errors and
exceptions are output to the command line and are logged in the log
files.
7. Use the following syntax to list all scheduled order purges that have not started:
ant listOrderPurges
8. Use the following syntax to remove an order purge that has not started:
ant removeOrderPurge
Note:
If you choose to close the source order, the Exception Processing function must
be associated with your workgroup.
Note:
Order migration should only be done within a window where no other order
processing will occur.
It is extremely important that the target order creation task data contain all of the fields in the
source order. The fields must be the same data type and have the same mnemonic and
length to be considered equal. Any field that exists in the source order but not in the target
creation task data is ignored. Fields that are defined in the target creation task data that have
no associated data in the source order remain blank.
where:
• submitTarget: Options are:
– true: Submits the target order following migration. (default)
– false: Leaves the target order in the creation task.
• closeSource: Options are:
– true: Closes the source order following migration. If you choose to close
the source order, the Exception Processing function must be associated
with your workgroup. (default)
– false: Leaves the source order unchanged by the migration operation.
• copyReference: Options are:
– true: Sets the order reference number of the target order to that of the
source order. (default)
– false: Leaves the target order reference number empty.
• copyRemarks: Options are:
– true: Copies the source order remarks and attachments to the target
order. (default)
– false: Does not copy remarks and attachments.
• errorAction: Options are:
– abort: Stops processing immediately. (default)
– ignore: Attempts to migrate the next available order.
• confirmation: Used for validation while migrating; if there are any warnings or
errors, XMLIE may ask the user for confirmation. Options are:
– true: Confirms warning or error messages, if any.
– false: Does not confirm warning or error messages.
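For example, a migrate node matching these options might look like the following sketch;
the attribute names are assumed to match the option names listed above:

<migrate submitTarget="true" closeSource="true" copyReference="true" copyRemarks="true" errorAction="abort" confirmation="false"/>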
2. If you are using batch scripts, use the following syntax to migrate an order from
one version of a cartridge to another version of the same cartridge:
where:
• namespace: Must be specified. Valid values are the namespace mnemonic.
• version: Must be specified. The versions of the source namespace and the target
namespace.
• type: The order type mnemonic. If specified, only orders with this type are migrated.
• source: The order source name. If specified, only orders with this source are
migrated.
For example:
migrate.bat config/config.xml -sourcenamespace default -sourceversion 1.0 -
targetversion 2.0
and
migrate.bat config/config.xml -sourcenamespace default -sourceversion 1.0 -
sourceordertype request for long distance -sourceordersource client care -
targetversion 2.0
where:
• namespace: Must be specified. Valid values are the namespace mnemonic.
• version: Must be specified. The versions of the source namespace and the target
namespace.
• type: The order type mnemonic. If specified, only orders with this type are
migrated.
• source: The order source name. If specified, only orders with this source are
migrated.
For example:
xmlie.root.namespace=bb_ocm_demo
xmlie.root.version=1.0.0.0.0
xmlie.root.sourceordertype=Add Order
xmlie.root.sourceordersource=Add Order
xmlie.root.targetorderversion=1.0.0.0.1
c. Use the following syntax to start a migration from one version of a cartridge to
another version of a cartridge:
ant migrate
Note:
When you perform a validation, you must supply a well-formed model;
otherwise, you may encounter undefined exceptions.
where:
• validateAgainstDB: Options are:
– true: Validates the XML document against existing orders in the database
schema to ensure it is compatible. (default)
– false: Does not perform an XML model validation against the database
schema.
• validateDocument: Options are:
– true: Validates the XML document against the OSM XML schema to
ensure it is well formed. (default)
– false: Does not perform the XML document validation.
• filename_path is the path to the validation log file that includes the file name of
the validation log file.
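For example, a validation node consistent with these options might look like the following
sketch; the attribute names are assumed to match the option names listed above:

<validation validateAgainstDB="true" validateDocument="true"/>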
2. Do one of the following:
a. If you are using Ant commands, do the following:
ant validate
Note:
The modeldoc.bat script, due to external limitations, is not case
sensitive. Because of this, it does not work if two entities' documents go
into the same directory and their names differ only by capitalization, for
example, two tasks in the same process named IsDebugOn and
isDebugon.
This limitation applies only to the modeldoc.bat script, not to the ant
htmlModel command used on UNIX and Linux systems.
4. If you are using the ant htmlModel command, configure the graphiz property in
the SDK\XMLImportExport\build.xml file to specify the directory where the third-
party GraphViz software is installed. For example:
<property name="graphiz" value="./bin/ATT/Graphviz/bin"/>
where HTMLModelDirectoryPath: Specifies the path to the directory for the HTML
model files for the modeldoc.bat script.
6. If you are using the ant htmlModel command, run the following command:
ant htmlModel
Note:
To view the HTML presentation, your browser must support Adobe SVG
Viewer.
Appendix A: OSM Credential Store API Command Reference
This appendix describes how to secure credentials for accessing external systems by using a
credential store, through the Oracle Fusion Middleware Credential Store Framework (CSF).
Oracle Communications Order and Service Management (OSM) applications, such as OSM
web clients and OSM cartridge applications, often are required to provide credential
information to gain access and log in to external systems. The credential information must be
secure and cannot be hard-coded in OSM code.
The following table lists the OSM credential store APIs and credential store-related classes:
OSM User Security and Credential Store API Reference Material
The OSM credential store APIs are wrapper APIs to the CSF APIs. Use the OSM credential
store APIs in your OSM-related code that requires credential retrieval, such as in data
providers and automation plug-ins.
CredStore
Credential store object.
The credential store object is the domain credential store class which contains a single
instance of the CredentialStore object. The JpsServiceLocator APIs in CSF look up the
single instance of the CredentialStore object.
Package Name
com.mslv.oms.security.credstore
Attributes
Name: store
Type: oracle.security.jps.service.credstore.CredentialStore
Description: A reference object to the Java Platform Security credential store object.
Error Conditions
Improper Java Platform Security configuration can cause credential store lookup to
fail.
Usage Notes
This API can be used directly if you have your own Java implementation class of
"ViewRuleContext" and "AutomationContext."
PasswordCredStore
Password credential store object.
Use com.mslv.oms.security.credstore.PasswordCredStore APIs in your Java classes to
retrieve user names and passwords from the credential store.
Package Name
com.mslv.oms.security.credstore
Attributes
• credstore
Type: CredStore
Description: A reference object to OSM credential store object.
• OSM_CREDENTIAL_MAPNAME
Type: String (static final)
Sensitive: Value is "osm"
Description: Pre-defined map name for OSM application in credential store.
• OSM_CREDENTIAL_KEYNAME_PREFIX
Type: String (static final)
Sensitive: Value is "osmUser_"
Description: Prefix of key names used for OSM users in credential store.
Input Parameters
mapName
Type: String
Description: Map name of the stored password credential object
keyName
Type: String
Description: Key name of the stored password credential object
Operation Outputs
passwordCredential
Type: PasswordCredential
Description: An object of oracle.security.jps.service.credstore.PasswordCredential, which
contains credential information stored under map and key name pair.
Input Parameters
mapName
Type: String
Description: Map name of the stored password credential object
keyName
Type: String
Description: Key name of the stored password credential object
Operation Outputs
Type: String
Description: A string containing user name and password information stored under the
map and key name pair. The format is "user name/password".
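As an illustration, the following Java sketch splits the returned string. It assumes this
operation is exposed as getCredentialAsString (the operation name is not shown in this
extract) and that mapName and keyName hold your own values:

PasswordCredStore credStore = new PasswordCredStore();
// Returned format is "user name/password"; split on the first '/' only,
// in case the password itself contains '/' characters.
String pair = credStore.getCredentialAsString(mapName, keyName);
int sep = pair.indexOf('/');
String userName = pair.substring(0, sep);
String password = pair.substring(sep + 1);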
Input Parameters
username
Type: String
Description: OSM user name.
Operation Outputs
Type: String
Description: A string containing the password value for the specified OSM user. OSM
user name and password values are stored in the credential store with map value
OSM_CREDENTIAL_MAPNAME, and key value starting with
OSM_CREDENTIAL_KEYNAME_PREFIX followed by the user name.
Input Parameters
mapName
Type: String
Description: Map name of the stored password credential object
keyName
Type: String
Description: Key name of the stored password credential object
Operation Outputs
Type: org.w3c.dom.Element
Description: An element that contains user name and password information stored
under the map and key name pair.
Error Conditions
Improper Java Platform Security configuration can cause the "read" operation on the
credential store to fail with a "no permission" error. Incorrect map and key names can cause
a "no credential found" problem.
Usage Notes
This API can be used directly if you have your own Java implementation class of
"ViewRuleContext" and "AutomationContext."
Example: Retrieve Password from OSM Default Map Given User Name
PasswordCredStore pwdCredStore;
try {
pwdCredStore = new PasswordCredStore();
return pwdCredStore.getOsmCredentialPassword(username);
} catch (final Exception e) {
throw new AutomationException("Fail to find password credential with
specified map and key name.", e);
}
Example: Retrieve Password from Custom Map Given Map and Key Names Used to
Store the Credentials
PasswordCredStore pwdCredStore;
try {
pwdCredStore = new PasswordCredStore();
return pwdCredStore.getCredentialAsXML(map, key);
} catch (final Exception e) {
throw new AutomationException("Fail to find password credential with
specified map and key name.", e);
}
CredStoreException
Credential store exception object.
Package Name
com.mslv.oms.security.credstore
Attributes
Name: target
Type: Exception
Description: Target exception is the original exception caught in the three OSM credential
store classes: CredStore, PasswordCredStore, JPSPasswordCredential.
Operation Outputs
exception
Type: Exception
Usage Notes
This API can be used directly if you have your own Java implementation class of
"ViewRuleContext" and "AutomationContext."
SoapAdapter
Use the attributes for the credential store when you define data provider instances in
your cartridges.
For detailed information on data provider adapters, see "Modeling Behaviors" in OSM
Modeling Guide.
Description
Built-in adapter.
Attributes
• CREDENTIAL_MAPNAME_PARAM
Type: String
Description: Defines the parameter name to be specified in data provider for
SOAP. A constant with value "oms:credentials.mapname".
• CREDENTIAL_KEYNAME_PARAM
Type: String
Description: Defines the parameter name to be specified in data provider for
SOAP. A constant with value "oms:credentials.keyname".
Business Logic
The business logic for retrieveInstance is as follows:
• If "oms:credentials.username" is provided in parameters:
If "oms:credentials.password" is also provided in parameter, then input values are
used directly.
Error Conditions
Invalid map and key names can cause credential lookup to return a "null" object.
Message text is "Password credential with map name %s and key name %s does not exist in
the credential store."
Usage Notes
Do not use operation APIs directly in this object.
ObjectelHTTPAdapter
Use the attributes for the credential store when you define data provider instances in your
cartridges.
For detailed information on data provider adapters, see "Modeling Behaviors" in OSM
Modeling Guide.
Description
Built-in adapter. Objectel HTTP adapter.
Attributes
• CREDENTIAL_MAPNAME_PARAM
Type: String
Description: Defines the parameter name to be specified in data provider for Objectel
HTTP type. A constant with value "obj:mapname".
• CREDENTIAL_KEYNAME_PARAM
Type: String
Description: Defines the parameter name to be specified in data provider for Objectel
HTTP type. A constant with value "obj:keyname".
• mapname
Type: String
Description: Value specified for map name parameter.
• keyname
Type: String
Description: Value specified for key name parameter.
Input Parameters
Context
Type: ViewRuleContext
Business Logic
The business logic for sendCommand is as follows:
• If "obj:user_name" is provided in the parameters:
If "obj:password" is also provided in the parameters, then the input values are used
directly.
If "obj:password" is not provided, the context API
"getOsmCredentialPassword(username)" is called to retrieve the password value
from the credential store and use it in the SOAP request.
• Otherwise, if "obj:mapname" and "obj:keyname" are provided in parameters, the
context API "getCredential(mapname, keyname)" is called to retrieve the user name
and password and use them in the SOAP request (after the command, the code
sends a SOAP message through HTTP to the specified URL).
Usage Notes
Do not use operation APIs directly in this object.
Error Conditions
Invalid map and key names can cause credential lookup to return a "null" object.
Message name: ViewRuleFailedException
Message text: "Password credential with map name %s and key name %s does not
exist in the credential store."
ViewRuleContext
Use operation APIs defined in this interface object for the credential store.
Description
Interface object.
Input Parameters
map
Type: String
Description: Map name
key
Type: String
Description: Key name
Operation Outputs
Type: String
Description: A string containing user name and password information stored under the map
and key name pair. The format is "user name/password".
Input Parameters
username
Type: String
Description: OSM user name.
Operation Outputs
Type: String
Description: Return the password value for the specified OSM user. OSM user name and
password values are stored in the credential store with map value
OSM_CREDENTIAL_MAPNAME, and key value starting with
OSM_CREDENTIAL_KEYNAME_PREFIX followed by the user name.
Error Conditions
Improper Java Platform Security configuration can cause creation of
PasswordCredStore to fail.
Message Name: ViewRuleFailedException
Message Text: "Fail to create PasswordCredStore."
Usage Notes
This API is often used in XQuery scripts.
AutomationContext
Use operation APIs from the AutomationContext interface to retrieve credentials in XQuery
code for automation tasks.
See "Example: Retrieve Password from OSM Default Map Given User Name."
See "Example: Retrieve Password from Custom Map Given Map and Key Names
Used to Store the Credentials."
Description
Interface object.
Input Parameters
map
Type: String
Description: Map name
key
Type: String
Description: Key name
Operation Outputs
Type: org.w3c.dom.Element
Description: An element that contains user name and password information stored
under map and key name pair.
Details on operation getCredentialAsXML():
/**
 * Get user name and password values in XML format given map and key values of
 * the credential.
 *
 * @param map map name of the stored password credential
 * @param key key name of the stored password credential
 * @return an element containing the user name and password
 */
Input Parameters
username
Type: String
Description: OSM user name.
Operation Outputs
Type: String
Description: Password value for the specified OSM user. OSM user name and password
values are stored in the credential store with map value OSM_CREDENTIAL_MAPNAME,
and key value starting with OSM_CREDENTIAL_KEYNAME_PREFIX followed by the user
name.
Error Conditions
Failure to read the credential store can be caused by improper Java Platform Security
configuration or invalid map and key names.
Message Name: AutomationException
Message Text: "Fail to create PasswordCredStore. Password credential with map name %s
and key name %s does not exist in the credential store."
Example: Retrieve Password from OSM Default Map Given User Name
declare variable $context external;
declare variable $username external;
let $osmPwd := context:getOsmCredentialPassword($context, $username)
return $osmPwd
Example: Retrieve Password from Custom Map Given Map and Key Names Used
to Store the Credential
Note:
This example assumes your map name is "osmTest".
Appendix B: Tools for Performance Testing, Tuning, and Troubleshooting
This appendix presents information about the tools that are available for performance testing,
tuning, and troubleshooting for your Oracle Communications Order and Service Management
(OSM) system.
SoapUI
SoapUI is a tool that you can use to submit an XML order to a run-time OSM environment.
Doing this confirms that OSM is able to receive and respond to order requests. In this case,
you can submit test orders associated with a test cartridge.
When submitting sample orders to run-time environments, the root level of the sample order
XML document must be either a CreateOrder or CreateOrderBySpec request.
For more information about submitting orders with SoapUI, see OSM Developer's Guide.
For more information about SoapUI, and to download the software, see the following website:
http://www.soapui.org/
Design Studio
Design Studio is an Eclipse-based design environment for OSM cartridge development. The
Cartridge Management view displays a list of the cartridges that are deployed to your OSM
system. The Deployed Versions table lists which cartridge version and build combination is
currently deployed in the target environment for the selected cartridge.
For more information, see Design Studio Concepts.
GCViewer
GCViewer is a free open source tool to view data that is produced by verbose garbage
collection. For more information about GCViewer versions and to download, see the
following websites:
http://www.tagtraum.com/gcviewer.html
https://github.com/chewiebug/GCViewer
http://sourceforge.net/projects/gcviewer/files/gcviewer-1.36.jar/download
https://github.com/chewiebug/GCViewer/wiki/Changelog
ThreadLogic
ThreadLogic is an open source visual thread dump analyzer that provides an in-depth
analysis of WebLogic Server thread dumps. ThreadLogic can also merge and analyze
multiple thread dumps and provide enhanced reporting that lets you view whether
threads are progressing across thread dumps.
You run ThreadLogic, version 1.1.205, using the following command:
java -jar ThreadLogic-1.1.205.jar
You can open the thread dumps or merge thread dumps by selecting several and then
right-clicking and choosing Diff Selection. For an overview of ThreadLogic, see the
following website:
http://www.ateam-oracle.com/analyzing-thread-dumps-in-middleware-part-4-2/
Appendix C: OSM Installed Components
This appendix describes the components that are automatically configured for OSM cloud
native.
Productized Cartridges
The OSM DB Installer deploys the following cartridge:
• Job Order cartridge: Enables the job control order feature. For information about using
job orders, see OSM Modeling Guide.
WebLogic Deployments
Table C-1 lists the WebLogic deployments.
WebLogic Configuration
This section lists and describes the WebLogic configuration.
Coherence Clusters
The following coherence cluster is configured:
• osmCohClustproject-instance
where project-instance is the project and instance name of your OSM cloud native
instance.
Work Managers
See "Using Work Managers to Prioritize Work" for information about work managers.
Table C-2 lists and describes the work managers that are configured.
JMS Servers
The following JMS server is configured:
• oms_jms_server
JMS Module
The following JMS module is configured:
• oms_jms_module: Contains the JMS system resources for OSM.
Quotas
Quotas are used to control the allotment of system resources available to OSM destinations
(queues or topics).
Table C-4 lists and describes the quotas that are configured.
Connection Factories
The following connection factories are configured:
• osm_connection_factory
• osmExternalClientConnectionFactory: Use this connection factory to submit OSM
Web Service and XML API requests (including order creation) from an external
system.
• oms_events_connection_factory
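The following Java sketch shows how an external JMS client might look up the external
connection factory. The JNDI name is assumed to match the factory name listed above, and
the server URL is illustrative:

import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

// Connect to the WebLogic JNDI tree of the OSM server (illustrative URL).
Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://osm-host:7001");
InitialContext ctx = new InitialContext(env);
ConnectionFactory factory =
    (ConnectionFactory) ctx.lookup("osmExternalClientConnectionFactory");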
Destination Key
The following destination key is configured:
• osmDescendingPriorityDestinationKey
For information about configuring destination keys, see Oracle Fusion Middleware
Administration Console Online Help for Oracle WebLogic Server.
JMS Template
A JMS template for the OSM destinations is created. This template applies
recommended defaults for the following settings:
• Redelivery Limit
• Redelivery Delay
• Error Destination
The following JMS template is configured:
• osmJmsTemplate
For information about JMS templates, see Oracle Fusion Middleware Administration Console
Online Help for Oracle WebLogic Server.
Data Sources
The following standalone data sources are configured:
• osm-app-conn-pool-0
• osm-infra-conn-pool
Database Configuration
The OSM DB Installer creates the following database schemas:
• Core schema
• Rule engine schema
• Reporting schema
Table C-7 shows the roles and permissions that are granted to database schema
users.